Datasets:
Tasks: Text Generation
Modalities: Text
Formats: parquet
Sub-tasks: language-modeling
Languages: code
Size: 10K - 100K
License:

repo_name (stringlengths 6-77) | path (stringlengths 8-215) | license (stringclasses, 15 values) | cells (sequence) | types (sequence) |
---|---|---|---|---|
pk-ai/training | machine-learning/deep-learning/udacity/ud730/1_notmnist.ipynb | mit | [
"Deep Learning\nAssignment 1\nThe objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.\nThis notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.",
"# These are all the modules we'll be using later. Make sure you can import them\n# before proceeding further.\nfrom __future__ import print_function\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport os\nimport sys\nimport tarfile\nfrom IPython.display import display, Image\nfrom scipy import ndimage\nfrom sklearn.linear_model import LogisticRegression\nfrom six.moves.urllib.request import urlretrieve\nfrom six.moves import cPickle as pickle\n\n# Config the matplotlib backend as plotting inline in IPython\n%matplotlib inline",
"First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k and the testset 19000 labeled examples. Given these sizes, it should be possible to train models quickly on any machine.",
"url = 'https://commondatastorage.googleapis.com/books1000/'\nlast_percent_reported = None\ndata_root = '.' # Change me to store data elsewhere\n\ndef download_progress_hook(count, blockSize, totalSize):\n \"\"\"A hook to report the progress of a download. This is mostly intended for users with\n slow internet connections. Reports every 5% change in download progress.\n \"\"\"\n global last_percent_reported\n percent = int(count * blockSize * 100 / totalSize)\n\n if last_percent_reported != percent:\n if percent % 5 == 0:\n sys.stdout.write(\"%s%%\" % percent)\n sys.stdout.flush()\n else:\n sys.stdout.write(\".\")\n sys.stdout.flush()\n \n last_percent_reported = percent\n \ndef maybe_download(filename, expected_bytes, force=False):\n \"\"\"Download a file if not present, and make sure it's the right size.\"\"\"\n dest_filename = os.path.join(data_root, filename)\n if force or not os.path.exists(dest_filename):\n print('Attempting to download:', filename) \n filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook)\n print('\\nDownload Complete!')\n statinfo = os.stat(dest_filename)\n if statinfo.st_size == expected_bytes:\n print('Found and verified', dest_filename)\n else:\n raise Exception(\n 'Failed to verify ' + dest_filename + '. Can you get to it with a browser?')\n return dest_filename\n\ntrain_filename = maybe_download('notMNIST_large.tar.gz', 247336696)\ntest_filename = maybe_download('notMNIST_small.tar.gz', 8458043)",
"Extract the dataset from the compressed .tar.gz file.\nThis should give you a set of directories, labeled A through J.",
"num_classes = 10\nnp.random.seed(133)\n\ndef maybe_extract(filename, force=False):\n root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz\n if os.path.isdir(root) and not force:\n # You may override by setting force=True.\n print('%s already present - Skipping extraction of %s.' % (root, filename))\n else:\n print('Extracting data for %s. This may take a while. Please wait.' % root)\n tar = tarfile.open(filename)\n sys.stdout.flush()\n tar.extractall(data_root)\n tar.close()\n data_folders = [\n os.path.join(root, d) for d in sorted(os.listdir(root))\n if os.path.isdir(os.path.join(root, d))]\n if len(data_folders) != num_classes:\n raise Exception(\n 'Expected %d folders, one per class. Found %d instead.' % (\n num_classes, len(data_folders)))\n print(data_folders)\n return data_folders\n \ntrain_folders = maybe_extract(train_filename)\ntest_folders = maybe_extract(test_filename)",
"Problem 1\nLet's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.",
"# Solution for Problem 1\nimport random\nprint('Displaying images of train folders')\n# Looping through train folders and displaying a random image of each folder\nfor path in train_folders:\n image_file = os.path.join(path, random.choice(os.listdir(path)))\n display(Image(filename=image_file))\n\nprint('Displaying images of test folders')\n# Looping through train folders and displaying a random image of each folder\nfor path in test_folders:\n image_file = os.path.join(path, random.choice(os.listdir(path)))\n display(Image(filename=image_file))",
"Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.\nWe'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. \nA few images might not be readable, we'll just skip them.",
"image_size = 28 # Pixel width and height.\npixel_depth = 255.0 # Number of levels per pixel.\n\ndef load_letter(folder, min_num_images):\n \"\"\"Load the data for a single letter label.\"\"\"\n image_files = os.listdir(folder)\n dataset = np.ndarray(shape=(len(image_files), image_size, image_size),\n dtype=np.float32)\n print(folder)\n num_images = 0\n for image in image_files:\n image_file = os.path.join(folder, image)\n try:\n image_data = (ndimage.imread(image_file).astype(float) - \n pixel_depth / 2) / pixel_depth\n if image_data.shape != (image_size, image_size):\n raise Exception('Unexpected image shape: %s' % str(image_data.shape))\n dataset[num_images, :, :] = image_data\n num_images = num_images + 1\n except IOError as e:\n print('Could not read:', image_file, ':', e, '- it\\'s ok, skipping.')\n \n dataset = dataset[0:num_images, :, :]\n if num_images < min_num_images:\n raise Exception('Many fewer images than expected: %d < %d' %\n (num_images, min_num_images))\n \n print('Full dataset tensor:', dataset.shape)\n print('Mean:', np.mean(dataset))\n print('Standard deviation:', np.std(dataset))\n return dataset\n \ndef maybe_pickle(data_folders, min_num_images_per_class, force=False):\n dataset_names = []\n for folder in data_folders:\n set_filename = folder + '.pickle'\n dataset_names.append(set_filename)\n if os.path.exists(set_filename) and not force:\n # You may override by setting force=True.\n print('%s already present - Skipping pickling.' % set_filename)\n else:\n print('Pickling %s.' % set_filename)\n dataset = load_letter(folder, min_num_images_per_class)\n try:\n with open(set_filename, 'wb') as f:\n pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)\n except Exception as e:\n print('Unable to save data to', set_filename, ':', e)\n \n return dataset_names\n\ntrain_datasets = maybe_pickle(train_folders, 45000)\ntest_datasets = maybe_pickle(test_folders, 1800)",
"Problem 2\nLet's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.",
"# Solution for Problem 2\ndef show_first_image(datasets):\n for pickl in datasets:\n print('Showing a first image from pickle ', pickl)\n try:\n with open(pickl, 'rb') as f:\n letter_set = pickle.load(f)\n plt.imshow(letter_set[0])\n except Exception as e:\n print('Unable to show image from pickle ', pickl, ':', e)\n raise\nprint('From Training dataset')\nshow_first_image(train_datasets)\nprint('From Test Dataset')\nshow_first_image(test_datasets)",
"Problem 3\nAnother check: we expect the data to be balanced across classes. Verify that.",
"def show_dataset_shape(datasets):\n for pickl in datasets:\n try:\n with open(pickl, 'rb') as f:\n letter_set = pickle.load(f)\n print('Shape of pickle ', pickl, 'is', np.shape(letter_set))\n except Exception as e:\n print('Unable to show image from pickle ', pickl, ':', e)\n raise\n\nprint('Shape for Training set')\nshow_dataset_shape(train_datasets)\nprint('Shape for Test set')\nshow_dataset_shape(test_datasets)",
"Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.\nAlso create a validation dataset for hyperparameter tuning.",
"def make_arrays(nb_rows, img_size):\n if nb_rows:\n dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)\n labels = np.ndarray(nb_rows, dtype=np.int32)\n else:\n dataset, labels = None, None\n return dataset, labels\n\ndef merge_datasets(pickle_files, train_size, valid_size=0):\n num_classes = len(pickle_files)\n valid_dataset, valid_labels = make_arrays(valid_size, image_size)\n train_dataset, train_labels = make_arrays(train_size, image_size)\n vsize_per_class = valid_size // num_classes\n tsize_per_class = train_size // num_classes\n \n start_v, start_t = 0, 0\n end_v, end_t = vsize_per_class, tsize_per_class\n end_l = vsize_per_class+tsize_per_class\n for label, pickle_file in enumerate(pickle_files): \n try:\n with open(pickle_file, 'rb') as f:\n letter_set = pickle.load(f)\n # let's shuffle the letters to have random validation and training set\n np.random.shuffle(letter_set)\n if valid_dataset is not None:\n valid_letter = letter_set[:vsize_per_class, :, :]\n valid_dataset[start_v:end_v, :, :] = valid_letter\n valid_labels[start_v:end_v] = label\n start_v += vsize_per_class\n end_v += vsize_per_class\n \n train_letter = letter_set[vsize_per_class:end_l, :, :]\n train_dataset[start_t:end_t, :, :] = train_letter\n train_labels[start_t:end_t] = label\n start_t += tsize_per_class\n end_t += tsize_per_class\n except Exception as e:\n print('Unable to process data from', pickle_file, ':', e)\n raise\n \n return valid_dataset, valid_labels, train_dataset, train_labels\n \n\"\"\"\ntrain_size = 200000\nvalid_size = 10000\ntest_size = 10000\n\"\"\" \ntrain_size = 20000\nvalid_size = 1000\ntest_size = 1000\n\nvalid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(\n train_datasets, train_size, valid_size)\n_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)\n\nprint('Training:', train_dataset.shape, train_labels.shape)\nprint('Validation:', valid_dataset.shape, valid_labels.shape)\nprint('Testing:', test_dataset.shape, test_labels.shape)",
"Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.",
"def randomize(dataset, labels):\n permutation = np.random.permutation(labels.shape[0])\n shuffled_dataset = dataset[permutation,:,:]\n shuffled_labels = labels[permutation]\n return shuffled_dataset, shuffled_labels\ntrain_dataset, train_labels = randomize(train_dataset, train_labels)\ntest_dataset, test_labels = randomize(test_dataset, test_labels)\nvalid_dataset, valid_labels = randomize(valid_dataset, valid_labels)",
"Problem 4\nConvince yourself that the data is still good after shuffling!",
"print('Printing Train, validation and test labels after shuffling')\ndef print_first_10_labels(labels):\n printing_labels = []\n for i in range(10):\n printing_labels.append(labels[[i]])\n print(printing_labels)\nprint_first_10_labels(train_labels)\nprint_first_10_labels(test_labels)\nprint_first_10_labels(valid_labels)",
"Finally, let's save the data for later reuse:",
"pickle_file = os.path.join(data_root, 'notMNIST.pickle')\n\ntry:\n f = open(pickle_file, 'wb')\n save = {\n 'train_dataset': train_dataset,\n 'train_labels': train_labels,\n 'valid_dataset': valid_dataset,\n 'valid_labels': valid_labels,\n 'test_dataset': test_dataset,\n 'test_labels': test_labels,\n }\n pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)\n f.close()\nexcept Exception as e:\n print('Unable to save data to', pickle_file, ':', e)\n raise\n\nstatinfo = os.stat(pickle_file)\nprint('Compressed pickle size:', statinfo.st_size)",
"Problem 5\nBy construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.\nMeasure how much overlap there is between training, validation and test samples.\nOptional questions:\n- What about near duplicates between datasets? (images that are almost identical)\n- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.\n\n\nProblem 6\nLet's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.\nTrain a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.\nOptional question: train an off-the-shelf model on all the data!",
"logreg_model_clf = LogisticRegression()\nnsamples, nx, ny = train_dataset.shape\nd2_train_dataset = train_dataset.reshape((nsamples,nx*ny))\nlogreg_model_clf.fit(d2_train_dataset, train_labels)\nfrom sklearn.metrics import accuracy_score\nnsamples, nx, ny = valid_dataset.shape\nd2_valid_dataset = valid_dataset.reshape((nsamples,nx*ny))\nprint(\"validation accuracy,\", accuracy_score(valid_labels, logreg_model_clf.predict(d2_valid_dataset)))\nnsamples, nx, ny = test_dataset.shape\nd2_train_dataset = test_dataset.reshape((nsamples,nx*ny))\nprint(\"test accuracy,\", accuracy_score(test_labels, logreg_model_clf.predict(d2_train_dataset)))"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mattgiguere/doglodge | code/.ipynb_checkpoints/bf_qt_scraping-checkpoint.ipynb | mit | [
"bf_qt_scraping\nThis notebook describes how hotel data can be scraped using PyQT.\nThe items we want to extract are:\n- the hotels for a given city\n- links to each hotel page\n- text hotel summary\n- text hotel description\nOnce the links for each hotel are determined, I then want to extract the following items pertaining to each review:\n- title\n- author\n- text\n- rating",
"import sys \nfrom PyQt4.QtGui import * \nfrom PyQt4.QtCore import * \nfrom PyQt4.QtWebKit import * \nfrom lxml import html \n\nclass Render(QWebPage): \n def __init__(self, url): \n self.app = QApplication(sys.argv) \n QWebPage.__init__(self) \n self.loadFinished.connect(self._loadFinished) \n self.mainFrame().load(QUrl(url)) \n self.app.exec_() \n\n def _loadFinished(self, result): \n self.frame = self.mainFrame() \n self.app.quit() \n \n def update_url(self, url):\n self.mainFrame().load(QUrl(url)) \n self.app.exec_() \n \n\nurl = 'http://www.bringfido.com/lodging/city/new_haven_ct_us' \n#This does the magic.Loads everything\nr = Render(url) \n#result is a QString.\nresult = r.frame.toHtml()\n\n# result\n\n#QString should be converted to string before processed by lxml\nformatted_result = str(result.toAscii())\n\n#Next build lxml tree from formatted_result\ntree = html.fromstring(formatted_result)\n\ntree.text_content\n\n#Now using correct Xpath we are fetching URL of archives\narchive_links = tree.xpath('//*[@id=\"results_list\"]/div')\nprint archive_links\n\nurl = 'http://pycoders.com/archive/' \nr = Render(url) \nresult = r.frame.toHtml()\n\n#QString should be converted to string before processed by lxml\nformatted_result = str(result.toAscii())\n\ntree = html.fromstring(formatted_result)\n\n#Now using correct Xpath we are fetching URL of archives\narchive_links = tree.xpath('//*[@class=\"campaign\"]/a/@href')\n\n# for lnk in archive_links:\n# print(lnk)",
"Now the Hotels",
"url = 'http://www.bringfido.com/lodging/city/new_haven_ct_us' \nr = Render(url) \nresult = r.frame.toHtml()\n\n#QString should be converted to string before processed by lxml\nformatted_result = str(result.toAscii())\n\ntree = html.fromstring(formatted_result)\n\n#Now using correct Xpath we are fetching URL of archives\narchive_links = tree.xpath('//*[@id=\"results_list\"]/div')\n\nprint(archive_links)\nprint('')\n\nfor lnk in archive_links:\n print(lnk.xpath('div[2]/h1/a/text()')[0])\n print(lnk.text_content())\n print('*'*25)\n",
"Now Get the Links",
"links = []\nfor lnk in archive_links:\n print(lnk.xpath('div/h1/a/@href')[0])\n links.append(lnk.xpath('div/h1/a/@href')[0])\n print('*'*25)\n\nlnk.xpath('//*/div/h1/a/@href')[0]\n\nlinks",
"Loading Reviews\nNext, we want to step through each page, and scrape the reviews for each hotel.",
"url_base = 'http://www.bringfido.com' \nr.update_url(url_base+links[0]) \nresult = r.frame.toHtml()\n\n#QString should be converted to string before processed by lxml\nformatted_result = str(result.toAscii())\n\ntree = html.fromstring(formatted_result)\n\nhotel_description = tree.xpath('//*[@class=\"body\"]/text()')\n\ndetails = tree.xpath('//*[@class=\"address\"]/text()')\n\naddress = details[0]\ncsczip = details[1]\nphone = details[2]\n\n#Now using correct Xpath we are fetching URL of archives\nreviews = tree.xpath('//*[@class=\"review_container\"]')\n\ntexts = []\ntitles = []\nauthors = []\nratings = []\n\nprint(reviews)\nprint('')\nfor rev in reviews:\n titles.append(rev.xpath('div/div[1]/text()')[0])\n authors.append(rev.xpath('div/div[2]/text()')[0])\n texts.append(rev.xpath('div/div[3]/text()')[0])\n ratings.append(rev.xpath('div[2]/img/@src')[0].split('/')[-1][0:1])\n print(rev.xpath('div[2]/img/@src')[0].split('/')[-1][0:1])\n\n\ntitles\n\nauthors\n\ntexts\n\nratings"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tritemio/multispot_paper | out_notebooks/usALEX-5samples-E-corrected-all-ph-out-12d.ipynb | mit | [
"Executed: Mon Mar 27 11:39:24 2017\nDuration: 7 seconds.\nusALEX-5samples - Template\n\nThis notebook is executed through 8-spots paper analysis.\nFor a direct execution, uncomment the cell below.",
"ph_sel_name = \"None\"\n\ndata_id = \"12d\"\n\n# data_id = \"7d\"",
"Load software and filenames definitions",
"from fretbursts import *\n\ninit_notebook()\nfrom IPython.display import display",
"Data folder:",
"data_dir = './data/singlespot/'\n\nimport os\ndata_dir = os.path.abspath(data_dir) + '/'\nassert os.path.exists(data_dir), \"Path '%s' does not exist.\" % data_dir",
"List of data files:",
"from glob import glob\nfile_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)\n## Selection for POLIMI 2012-11-26 datatset\nlabels = ['17d', '27d', '7d', '12d', '22d']\nfiles_dict = {lab: fname for lab, fname in zip(labels, file_list)}\nfiles_dict\n\ndata_id",
"Data load\nInitial loading of the data:",
"d = loader.photon_hdf5(filename=files_dict[data_id])",
"Load the leakage coefficient from disk:",
"leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'\nleakage = np.loadtxt(leakage_coeff_fname)\n\nprint('Leakage coefficient:', leakage)",
"Load the direct excitation coefficient ($d_{exAA}$) from disk:",
"dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'\ndir_ex_aa = np.loadtxt(dir_ex_coeff_fname)\n\nprint('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)",
"Load the gamma-factor ($\\gamma$) from disk:",
"gamma_fname = 'results/usALEX - gamma factor - all-ph.csv'\ngamma = np.loadtxt(gamma_fname)\n\nprint('Gamma-factor:', gamma)",
"Update d with the correction coefficients:",
"d.leakage = leakage\nd.dir_ex = dir_ex_aa\nd.gamma = gamma",
"Laser alternation selection\nAt this point we have only the timestamps and the detector numbers:",
"d.ph_times_t[0][:3], d.ph_times_t[0][-3:]#, d.det_t\n\nprint('First and last timestamps: {:10,} {:10,}'.format(d.ph_times_t[0][0], d.ph_times_t[0][-1]))\nprint('Total number of timestamps: {:10,}'.format(d.ph_times_t[0].size))",
"We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations:",
"d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)",
"We should check if everithing is OK with an alternation histogram:",
"plot_alternation_hist(d)",
"If the plot looks good we can apply the parameters with:",
"loader.alex_apply_period(d)\n\nprint('D+A photons in D-excitation period: {:10,}'.format(d.D_ex[0].sum()))\nprint('D+A photons in A-excitation period: {:10,}'.format(d.A_ex[0].sum()))",
"Measurements infos\nAll the measurement data is in the d variable. We can print it:",
"d",
"Or check the measurements duration:",
"d.time_max",
"Compute background\nCompute the background using automatic threshold:",
"d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)\n\ndplot(d, timetrace_bg)\n\nd.rate_m, d.rate_dd, d.rate_ad, d.rate_aa",
"Burst search and selection",
"d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all'))\n\nprint(d.ph_sel)\ndplot(d, hist_fret);\n\n# if data_id in ['7d', '27d']:\n# ds = d.select_bursts(select_bursts.size, th1=20)\n# else:\n# ds = d.select_bursts(select_bursts.size, th1=30)\n\nds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)\n\nn_bursts_all = ds.num_bursts[0]\n\ndef select_and_plot_ES(fret_sel, do_sel):\n ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)\n ds_do = ds.select_bursts(select_bursts.ES, **do_sel)\n bpl.plot_ES_selection(ax, **fret_sel)\n bpl.plot_ES_selection(ax, **do_sel) \n return ds_fret, ds_do\n\nax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)\n\nif data_id == '7d':\n fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)\n do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True) \n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)\n \nelif data_id == '12d':\n fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)\n do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)\n\nelif data_id == '17d':\n fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)\n do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)\n\nelif data_id == '22d':\n fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)\n do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) \n\nelif data_id == '27d':\n fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)\n do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)\n ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel) \n\nn_bursts_do = ds_do.num_bursts[0]\nn_bursts_fret = ds_fret.num_bursts[0]\n\nn_bursts_do, n_bursts_fret\n\nd_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)\nprint('D-only fraction:', d_only_frac)\n\ndplot(ds_fret, hist2d_alex, scatter_alpha=0.1);\n\ndplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);",
"Donor Leakage fit",
"bandwidth = 0.03\n\nE_range_do = (-0.1, 0.15)\nE_ax = np.r_[-0.2:0.401:0.0002]\n\nE_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size', \n x_range=E_range_do, x_ax=E_ax, save_fitter=True)\n\nmfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth])\nplt.xlim(-0.3, 0.5)\nprint(\"%s: E_peak = %.2f%%\" % (ds.ph_sel, E_pr_do_kde*100))",
"Burst sizes",
"nt_th1 = 50\n\ndplot(ds_fret, hist_size, which='all', add_naa=False)\nxlim(-0, 250)\nplt.axvline(nt_th1)\n\nTh_nt = np.arange(35, 120)\nnt_th = np.zeros(Th_nt.size)\nfor i, th in enumerate(Th_nt):\n ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)\n nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th\n\nplt.figure()\nplot(Th_nt, nt_th)\nplt.axvline(nt_th1)\n\nnt_mean = nt_th[np.where(Th_nt == nt_th1)][0]\nnt_mean",
"Fret fit\nMax position of the Kernel Density Estimation (KDE):",
"E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')\nE_fitter = ds_fret.E_fitter\n\nE_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])\nE_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))\n\nE_fitter.fit_res[0].params.pretty_print()\n\nfig, ax = plt.subplots(1, 2, figsize=(14, 4.5))\nmfit.plot_mfit(E_fitter, ax=ax[0])\nmfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])\nprint('%s\\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))\ndisplay(E_fitter.params*100)",
"Weighted mean of $E$ of each burst:",
"ds_fret.fit_E_m(weights='size')",
"Gaussian fit (no weights):",
"ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)",
"Gaussian fit (using burst size as weights):",
"ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')\n\nE_kde_w = E_fitter.kde_max_pos[0]\nE_gauss_w = E_fitter.params.loc[0, 'center']\nE_gauss_w_sig = E_fitter.params.loc[0, 'sigma']\nE_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))\nE_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr\nE_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr",
"Stoichiometry fit\nMax position of the Kernel Density Estimation (KDE):",
"S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)\nS_fitter = ds_fret.S_fitter\n\nS_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])\nS_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)\n\nfig, ax = plt.subplots(1, 2, figsize=(14, 4.5))\nmfit.plot_mfit(S_fitter, ax=ax[0])\nmfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])\nprint('%s\\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))\ndisplay(S_fitter.params*100)\n\nS_kde = S_fitter.kde_max_pos[0]\nS_gauss = S_fitter.params.loc[0, 'center']\nS_gauss_sig = S_fitter.params.loc[0, 'sigma']\nS_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))\nS_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr\nS_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr",
"The Maximum likelihood fit for a Gaussian population is the mean:",
"S = ds_fret.S[0]\nS_ml_fit = (S.mean(), S.std())\nS_ml_fit",
"Computing the weighted mean and weighted standard deviation we get:",
"weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)\nS_mean = np.dot(weights, S)/weights.sum()\nS_std_dev = np.sqrt(\n np.dot(weights, (S - S_mean)**2)/weights.sum())\nS_wmean_fit = [S_mean, S_std_dev]\nS_wmean_fit",
"Save data to file",
"sample = data_id",
"The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.",
"variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '\n 'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '\n 'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '\n 'E_pr_do_kde nt_mean\\n')",
"This is just a trick to format the different variables:",
"variables_csv = variables.replace(' ', ',')\nfmt_float = '{%s:.6f}'\nfmt_int = '{%s:d}'\nfmt_str = '{%s}'\nfmt_dict = {**{'sample': fmt_str}, \n **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}\nvar_dict = {name: eval(name) for name in variables.split()}\nvar_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\\n'\ndata_str = var_fmt.format(**var_dict)\n\nprint(variables_csv)\nprint(data_str)\n\n# NOTE: The file name should be the notebook name but with .csv extension\nwith open('results/usALEX-5samples-E-corrected-all-ph.csv', 'a') as f:\n f.seek(0, 2)\n if f.tell() == 0:\n f.write(variables_csv)\n f.write(data_str)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tritemio/multispot_paper | out_notebooks/usALEX-5samples-PR-raw-dir_ex_aa-fit-out-AexAem-17d.ipynb | mit | [
"Executed: Mon Mar 27 11:38:07 2017\nDuration: 10 seconds.\nusALEX-5samples - Template\n\nThis notebook is executed through 8-spots paper analysis.\nFor a direct execution, uncomment the cell below.",
"ph_sel_name = \"AexAem\"\n\ndata_id = \"17d\"\n\n# ph_sel_name = \"all-ph\"\n# data_id = \"7d\"",
"Load software and filenames definitions",
"from fretbursts import *\n\ninit_notebook()\nfrom IPython.display import display",
"Data folder:",
"data_dir = './data/singlespot/'",
"Check that the folder exists:",
"import os\ndata_dir = os.path.abspath(data_dir) + '/'\nassert os.path.exists(data_dir), \"Path '%s' does not exist.\" % data_dir",
"List of data files in data_dir:",
"from glob import glob\n\nfile_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)\nfile_list\n\n## Selection for POLIMI 2012-12-6 dataset\n# file_list.pop(2)\n# file_list = file_list[1:-2]\n# display(file_list)\n# labels = ['22d', '27d', '17d', '12d', '7d']\n\n## Selection for P.E. 2012-12-6 dataset\n# file_list.pop(1)\n# file_list = file_list[:-1]\n# display(file_list)\n# labels = ['22d', '27d', '17d', '12d', '7d']\n\n## Selection for POLIMI 2012-11-26 datatset\nlabels = ['17d', '27d', '7d', '12d', '22d']\n\nfiles_dict = {lab: fname for lab, fname in zip(labels, file_list)}\nfiles_dict\n\nph_sel_map = {'all-ph': Ph_sel('all'), 'AexAem': Ph_sel(Aex='Aem')}\nph_sel = ph_sel_map[ph_sel_name]\n\ndata_id, ph_sel_name",
"Data load\nInitial loading of the data:",
"d = loader.photon_hdf5(filename=files_dict[data_id])",
"Laser alternation selection\nAt this point we have only the timestamps and the detector numbers:",
"d.ph_times_t, d.det_t",
"We need to define some parameters: donor and acceptor ch, excitation period and donor and acceptor excitiations:",
"d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)",
"We should check if everithing is OK with an alternation histogram:",
"plot_alternation_hist(d)",
"If the plot looks good we can apply the parameters with:",
"loader.alex_apply_period(d)",
"Measurements infos\nAll the measurement data is in the d variable. We can print it:",
"d",
"Or check the measurements duration:",
"d.time_max",
"Compute background\nCompute the background using automatic threshold:",
"d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)\n\ndplot(d, timetrace_bg)\n\nd.rate_m, d.rate_dd, d.rate_ad, d.rate_aa",
"Burst search and selection",
"from mpl_toolkits.axes_grid1 import AxesGrid\nimport lmfit\nprint('lmfit version:', lmfit.__version__)\n\nassert d.dir_ex == 0\nassert d.leakage == 0\n\nd.burst_search(m=10, F=6, ph_sel=ph_sel)\n\nprint(d.ph_sel, d.num_bursts)\n\nds_sa = d.select_bursts(select_bursts.naa, th1=30)\nds_sa.num_bursts",
"Preliminary selection and plots",
"mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30\nds_saw = d.select_bursts_mask_apply([mask])\n\nds_sas0 = ds_sa.select_bursts(select_bursts.S, S2=0.10)\nds_sas = ds_sa.select_bursts(select_bursts.S, S2=0.15)\nds_sas2 = ds_sa.select_bursts(select_bursts.S, S2=0.20)\nds_sas3 = ds_sa.select_bursts(select_bursts.S, S2=0.25)\n\nds_st = d.select_bursts(select_bursts.size, add_naa=True, th1=30)\nds_sas.num_bursts\n\ndx = ds_sas0\nsize = dx.na[0] + dx.nd[0]\ns_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)\ns_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])\nplot(s_ax, s_hist, '-o', alpha=0.5)\n\ndx = ds_sas\nsize = dx.na[0] + dx.nd[0]\ns_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)\ns_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])\nplot(s_ax, s_hist, '-o', alpha=0.5)\n\ndx = ds_sas2\nsize = dx.na[0] + dx.nd[0]\ns_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)\ns_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])\nplot(s_ax, s_hist, '-o', alpha=0.5)\n\ndx = ds_sas3\nsize = dx.na[0] + dx.nd[0]\ns_hist, s_bins = np.histogram(size, bins=np.r_[-15 : 25 : 1], density=True)\ns_ax = s_bins[:-1] + 0.5*(s_bins[1] - s_bins[0])\nplot(s_ax, s_hist, '-o', alpha=0.5)\n\nplt.title('(nd + na) for A-only population using different S cutoff');\n\ndx = ds_sa\n\nalex_jointplot(dx);\n\ndplot(ds_sa, hist_S)",
"A-direct excitation fitting\nTo extract the A-direct excitation coefficient we need to fit the \nS values for the A-only population.\nThe S value for the A-only population is fitted with different methods:\n- Histogram git with 2 Gaussians or with 2 asymmetric Gaussians \n(an asymmetric Gaussian has right- and left-side of the peak\ndecreasing according to different sigmas).\n- KDE maximum\nIn the following we apply these methods using different selection\nor weighting schemes to reduce amount of FRET population and make\nfitting of the A-only population easier.\nEven selection\nHere A-only and FRET population are evenly selected.",
"dx = ds_sa\n\nbin_width = 0.03\nbandwidth = 0.03\nbins = np.r_[-0.2 : 1 : bin_width]\nx_kde = np.arange(bins.min(), bins.max(), 0.0002)\n\n## Weights\nweights = None\n\n## Histogram fit\nfitter_g = mfit.MultiFitter(dx.S)\nfitter_g.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])\nfitter_g.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))\nS_hist_orig = fitter_g.hist_pdf\n\nS_2peaks = fitter_g.params.loc[0, 'p1_center']\ndir_ex_S2p = S_2peaks/(1 - S_2peaks)\nprint('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p)\n\n## KDE\nfitter_g.calc_kde(bandwidth=bandwidth)\nfitter_g.find_kde_max(x_kde, xmin=0, xmax=0.15)\n\nS_peak = fitter_g.kde_max_pos[0]\ndir_ex_S_kde = S_peak/(1 - S_peak)\nprint('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde)\n\nfig, ax = plt.subplots(1, 2, figsize=(14, 4.5))\n\nmfit.plot_mfit(fitter_g, ax=ax[0])\nax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))\n\nmfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=True)\nax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak*100));\n\n## 2-Asym-Gaussian\nfitter_ag = mfit.MultiFitter(dx.S)\nfitter_ag.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])\nfitter_ag.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.1, p2_center=0.4))\n#print(fitter_ag.fit_obj[0].model.fit_report())\n\nS_2peaks_a = fitter_ag.params.loc[0, 'p1_center']\ndir_ex_S2pa = S_2peaks_a/(1 - S_2peaks_a)\nprint('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2pa)\n\nfig, ax = plt.subplots(1, 2, figsize=(14, 4.5))\n\nmfit.plot_mfit(fitter_g, ax=ax[0])\nax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks*100))\n\nmfit.plot_mfit(fitter_ag, ax=ax[1])\nax[1].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_a*100));",
"Zero threshold on nd\nSelect bursts with:\n$$n_d < 0$$.",
"dx = ds_sa.select_bursts(select_bursts.nd, th1=-100, th2=0)\n\nfitter = bext.bursts_fitter(dx, 'S')\nfitter.fit_histogram(model = mfit.factory_gaussian(center=0.1))\nS_1peaks_th = fitter.params.loc[0, 'center']\ndir_ex_S1p = S_1peaks_th/(1 - S_1peaks_th)\nprint('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S1p)\n\nmfit.plot_mfit(fitter)\nplt.xlim(-0.1, 0.6)",
"Selection 1\nBursts are weighted using $w = f(S)$, where the function $f(S)$ is a\nGaussian fitted to the $S$ histogram of the FRET population.",
"dx = ds_sa\n\n## Weights\nweights = 1 - mfit.gaussian(dx.S[0], fitter_g.params.loc[0, 'p2_center'], fitter_g.params.loc[0, 'p2_sigma'])\nweights[dx.S[0] >= fitter_g.params.loc[0, 'p2_center']] = 0\n\n## Histogram fit\nfitter_w1 = mfit.MultiFitter(dx.S)\nfitter_w1.weights = [weights]\nfitter_w1.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])\nfitter_w1.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))\nS_2peaks_w1 = fitter_w1.params.loc[0, 'p1_center']\ndir_ex_S2p_w1 = S_2peaks_w1/(1 - S_2peaks_w1)\nprint('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w1)\n\n## KDE\nfitter_w1.calc_kde(bandwidth=bandwidth)\nfitter_w1.find_kde_max(x_kde, xmin=0, xmax=0.15)\nS_peak_w1 = fitter_w1.kde_max_pos[0]\ndir_ex_S_kde_w1 = S_peak_w1/(1 - S_peak_w1)\nprint('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w1)\n\ndef plot_weights(x, weights, ax):\n ax2 = ax.twinx()\n x_sort = x.argsort()\n ax2.plot(x[x_sort], weights[x_sort], color='k', lw=4, alpha=0.4)\n ax2.set_ylabel('Weights');\n\nfig, ax = plt.subplots(1, 2, figsize=(14, 4.5))\nmfit.plot_mfit(fitter_w1, ax=ax[0])\nmfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)\nplot_weights(dx.S[0], weights, ax=ax[0])\nax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w1*100))\n\nmfit.plot_mfit(fitter_w1, ax=ax[1], plot_model=False, plot_kde=True)\nmfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)\nplot_weights(dx.S[0], weights, ax=ax[1])\nax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w1*100));",
"Selection 2\nBursts are here weighted using weights $w$:\n$$w = n_{aa} - |n_a + n_d|$$",
"## Weights\nsizes = dx.nd[0] + dx.na[0] #- dir_ex_S_kde_w3*dx.naa[0]\nweights = dx.naa[0] - abs(sizes)\nweights[weights < 0] = 0\n\n## Histogram\nfitter_w4 = mfit.MultiFitter(dx.S)\nfitter_w4.weights = [weights]\nfitter_w4.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])\nfitter_w4.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))\nS_2peaks_w4 = fitter_w4.params.loc[0, 'p1_center']\ndir_ex_S2p_w4 = S_2peaks_w4/(1 - S_2peaks_w4)\nprint('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w4)\n\n## KDE\nfitter_w4.calc_kde(bandwidth=bandwidth)\nfitter_w4.find_kde_max(x_kde, xmin=0, xmax=0.15)\nS_peak_w4 = fitter_w4.kde_max_pos[0]\ndir_ex_S_kde_w4 = S_peak_w4/(1 - S_peak_w4)\nprint('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w4)\n\nfig, ax = plt.subplots(1, 2, figsize=(14, 4.5))\n\nmfit.plot_mfit(fitter_w4, ax=ax[0])\nmfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)\n#plot_weights(dx.S[0], weights, ax=ax[0])\nax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w4*100))\n\nmfit.plot_mfit(fitter_w4, ax=ax[1], plot_model=False, plot_kde=True)\nmfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)\n#plot_weights(dx.S[0], weights, ax=ax[1])\nax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w4*100));",
"Selection 3\nBursts are here selected according to:\n$$n_{aa} - |n_a + n_d| > 30$$",
"mask = (d.naa[0] - np.abs(d.na[0] + d.nd[0])) > 30\nds_saw = d.select_bursts_mask_apply([mask])\nprint(ds_saw.num_bursts)\n\ndx = ds_saw\n\n## Weights\nweights = None\n\n## 2-Gaussians\nfitter_w5 = mfit.MultiFitter(dx.S)\nfitter_w5.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])\nfitter_w5.fit_histogram(model = mfit.factory_two_gaussians(p1_center=0.1, p2_center=0.4))\nS_2peaks_w5 = fitter_w5.params.loc[0, 'p1_center']\ndir_ex_S2p_w5 = S_2peaks_w5/(1 - S_2peaks_w5)\nprint('Fitted direct excitation (na/naa) [2-Gauss]:', dir_ex_S2p_w5)\n\n## KDE\nfitter_w5.calc_kde(bandwidth=bandwidth)\nfitter_w5.find_kde_max(x_kde, xmin=0, xmax=0.15)\nS_peak_w5 = fitter_w5.kde_max_pos[0]\nS_2peaks_w5_fiterr = fitter_w5.fit_res[0].params['p1_center'].stderr\ndir_ex_S_kde_w5 = S_peak_w5/(1 - S_peak_w5)\nprint('Fitted direct excitation (na/naa) [KDE]: ', dir_ex_S_kde_w5)\n\n## 2-Asym-Gaussians\nfitter_w5a = mfit.MultiFitter(dx.S)\nfitter_w5a.histogram(bins=np.r_[-0.2 : 1.2 : bandwidth])\nfitter_w5a.fit_histogram(model = mfit.factory_two_asym_gaussians(p1_center=0.05, p2_center=0.3))\nS_2peaks_w5a = fitter_w5a.params.loc[0, 'p1_center']\ndir_ex_S2p_w5a = S_2peaks_w5a/(1 - S_2peaks_w5a)\n#print(fitter_w5a.fit_obj[0].model.fit_report(min_correl=0.5))\nprint('Fitted direct excitation (na/naa) [2-Asym-Gauss]:', dir_ex_S2p_w5a)\n\nfig, ax = plt.subplots(1, 3, figsize=(19, 4.5))\n\nmfit.plot_mfit(fitter_w5, ax=ax[0])\nmfit.plot_mfit(fitter_g, ax=ax[0], plot_model=False, plot_kde=False)\nax[0].set_title('2-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5*100))\n\nmfit.plot_mfit(fitter_w5, ax=ax[1], plot_model=False, plot_kde=True)\nmfit.plot_mfit(fitter_g, ax=ax[1], plot_model=False, plot_kde=False)\nax[1].set_title('KDE fit (S_fit = %.2f %%)' % (S_peak_w5*100));\n\nmfit.plot_mfit(fitter_w5a, ax=ax[2])\nmfit.plot_mfit(fitter_g, ax=ax[2], plot_model=False, plot_kde=False)\nax[2].set_title('2-Asym-Gaussians fit (S_fit = %.2f %%)' % (S_2peaks_w5a*100));",
"Save data to file",
"sample = data_id\nn_bursts_aa = ds_sas.num_bursts[0]",
"The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.",
"variables = ('sample n_bursts_aa dir_ex_S1p dir_ex_S_kde dir_ex_S2p dir_ex_S2pa '\n 'dir_ex_S2p_w1 dir_ex_S_kde_w1 dir_ex_S_kde_w4 dir_ex_S_kde_w5 dir_ex_S2p_w5 dir_ex_S2p_w5a '\n 'S_2peaks_w5 S_2peaks_w5_fiterr\\n')",
"This is just a trick to format the different variables:",
"variables_csv = variables.replace(' ', ',')\nfmt_float = '{%s:.6f}'\nfmt_int = '{%s:d}'\nfmt_str = '{%s}'\nfmt_dict = {**{'sample': fmt_str}, \n **{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}\nvar_dict = {name: eval(name) for name in variables.split()}\nvar_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\\n'\ndata_str = var_fmt.format(**var_dict)\n\nprint(variables_csv)\nprint(data_str)\n\n# NOTE: The file name should be the notebook name but with .csv extension\nwith open('results/usALEX-5samples-PR-raw-dir_ex_aa-fit-%s.csv' % ph_sel_name, 'a') as f:\n f.seek(0, 2)\n if f.tell() == 0:\n f.write(variables_csv)\n f.write(data_str)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Juan-Mateos/coll_int_ai_case | notebooks/ml_topic_analysis_exploration.ipynb | mit | [
"Prototype pipeline for the analysis of ML arxiv data\nWe query arxiv to get papers, and then run them against Crossref event data to find social media discussion and Microsoft Academic Knowledge to find institutional affiliations\n```\nQuery Arxiv -> Paper repository -> Analysis -> Topic model -> Classify\n | |\n | |----> Social network analysis of researchers\n | |----> Geocoding of institutions (via GRID?)\n |\n Extract author data from Google Scholar ----> Geocode institution via Google Places API?\n | |\n Enrich paper data with MAK(?) |---> Spatial and network analysis\n |\n Obtain Crossref Event data\n```\nPreamble",
"%matplotlib inline\n\n#Some imports\nimport time\n#import xml.etree.ElementTree as etree\nfrom lxml import etree\nimport feedparser\n\n#Imports\n#Key imports are loaded from my profile (see standard_imports.py in src folder).\n\n#Paths\n\n#Paths\ntop = os.path.dirname(os.getcwd())\n\n#External data (to download the GRID database)\next_data = os.path.join(top,'data/external')\n\n#Interim data (to place seed etc)\nint_data = os.path.join(top,'data/interim')\n\n#Figures\nfig_path = os.path.join(top,'reports')\n\n#Models\nmod_path = os.path.join(top,'models')\n\n\n#Get date for saving files\ntoday = datetime.datetime.today()\n\ntoday_str = \"_\".join([str(x) for x in [today.day,today.month,today.year]])\n\n\n#Functions",
"1. Get Arxiv data about machine learning\n\nWrite a APi querier and extract papers with the terms machine learning or artificial intelligence. Get 2000 results... and play nice!",
"class Arxiv_querier():\n '''\n This class takes as an input a query and the number of results, and returns all the parsed results.\n Includes routines to deal with multiple pages of results.\n\n '''\n \n def __init__(self,base_url=\"http://export.arxiv.org/api/query?\"):\n '''\n Initialise\n '''\n \n self.base_url = base_url\n \n def query(self,query_string,max_results=100,wait_time=3):\n '''\n Query the base url\n \n '''\n #Attribute query string\n \n #Load base URL\n base_url = self.base_url\n \n #Prepare query string\n processed_query = re.sub(' ','+',query_string)\n \n self.query_string=\"_\".join(query_string.split(\" \"))\n \n start=0\n pages = 0\n \n #Run the query and store results for as long as the number of results is bigger than the max results\n keep_running = True\n \n result_store = []\n \n while keep_running==True:\n pages +=1\n print(pages)\n \n #Query url (NB the start arg, which will change as we go through different\n #pages)\n query_url = base_url+'search_query=all:{q}&start={s}&max_results={max_res}'.format(\n q=processed_query,s=start,max_res=max_results)\n \n \n #Download\n source = requests.get(query_url)\n \n #Parse the xml and get the entries (papers)\n parsed = feedparser.parse(source.content)\n \n #Extract entries\n entries = parsed['entries']\n \n #If the number of entries is bigger than the maximum number of results\n #this means we need to go to another page. We do that by offseting the\n #start with max results\n \n result_store.append(entries)\n \n if len(entries)==max_results:\n start+=max_results\n \n #If we have less than max results this means we have run out of \n #results and we toggle the keep_running switch off.\n if len(entries)<max_results:\n keep_running=False\n \n time.sleep(wait_time)\n \n #Save results in a flat list\n self.entry_results = [x for el in result_store for x in el]\n \n def extract_data(self):\n '''\n Here we extract data from the entries \n \n '''\n \n #Load entries\n entries = self.entry_results\n \n #Create df\n output = pd.concat([pd.DataFrame({\n 'query':self.query_string,\n 'id':x['id'],\n 'link':x['link'],\n 'title':x['title'],\n 'authors':\", \".join([el['name'] for el in x['authors']]),\n 'summary':x['summary'],\n 'updated':x['updated'],\n 'published':x['published'],\n 'category':x['arxiv_primary_category']['term'],\n 'pdf':str([el['href'] for el in x['links'] if el['type']=='application/pdf'][0]\n )},index=[0]) for x in entries]).reset_index(drop=True)\n \n output['year_published'] = [x.split(\"-\")[0] for x in output['published']]\n \n self.output_df = output\n\nquery_terms = ['artificial intelligence','machine learning','deep learning']\n\n\n#There are some inconsistencies in the number of results so we run the query three times for each\n#term and remove duplicated results\n\ndef extract_arxiv_data(term,max_results=1000,wait_time=10, tests=3):\n '''\n This function initialises the Arxiv_querier class, extracts the data and outputs it\n \n '''\n print(term)\n \n collected = []\n \n #We collect the data thrice\n for i in np.arange(tests):\n print('run'+ ' ' +str(i))\n initialised = Arxiv_querier()\n initialised.query(term,max_results,wait_time)\n initialised.extract_data()\n out = initialised.output_df\n collected.append(out)\n \n #We concatenate the dfs and remove the duplicates.\n \n output = pd.concat(collected)\n output_no_dupes = output.drop_duplicates('id')\n \n #Return both\n return([output,output_no_dupes])\n\n\narxiv_ai_results_three = [extract_arxiv_data(term=q) for q in query_terms]\n\nall_papers = 
pd.concat([x[1] for x in arxiv_ai_results_three]).drop_duplicates('id').reset_index(drop=True)\nprint(all_papers.shape)\nall_papers.head()\n\nall_papers.to_csv(int_data+'/{today}_ai_papers.csv'.format(today=today_str),index=False)",
"2. Some exploratory analysis",
"from nltk.corpus import stopwords\nfrom nltk.tokenize import word_tokenize, sent_tokenize, RegexpTokenizer, PunktSentenceTokenizer\nfrom nltk.stem import WordNetLemmatizer, SnowballStemmer, PorterStemmer\nimport scipy\nimport ast\nimport string as st\nfrom bs4 import BeautifulSoup\n\nimport gensim\nfrom gensim.models.coherencemodel import CoherenceModel\nfrom sklearn.feature_extraction.text import TfidfVectorizer\nfrom itertools import product\n\nstopwords_c = stopwords.words('english')\nstemmer = PorterStemmer()\nlemmatizer= WordNetLemmatizer()\n\n#Read papers\nall_papers = pd.read_csv(int_data+'/19_8_2017_ai_papers.csv'.format(today=today_str))\n\n#Let's begin by looking at years\n\n#When where they published?\n\n#Year distribution\nyear_pubs = all_papers['year_published'].value_counts()\nyear_pubs.index = [int(x) for x in year_pubs.index]\n\nfig,ax = plt.subplots(figsize=(10,5))\n\nyear_pubs_sorted = year_pubs[sorted(year_pubs.index)]\nyear_pubs_subset = year_pubs_sorted[year_pubs_sorted.index>1991]\n\nax.plot(np.arange(1993,2018),year_pubs_subset.cumsum(),color='red')\nax.bar(np.arange(1993,2018),year_pubs_subset)\nax.hlines(xmin=1993,xmax=2017,y=[10000,20000,30000,40000],colors='green',linestyles='dashed',alpha=0.7)\n\n\nax.set_title(\"Papers on AI, ML and DL, total per year (bar) and cumulative (red)\",size=14)\n\n\n#What are the categories of the papers? Are we capturing what we think we are capturing\n#Top 20\nall_papers['category'].value_counts()[:20]",
"See <a href='https://arxiv.org/help/api/user-manual'>here</a> for abbreviations of categories.\nIn a nutshell, AI is AI, LG is 'Learning', CV is 'Computer Vision', 'CL' is 'computation and language' and NE is 'Neural and Evolutionary computing'. SL.ML is kind of self-explanatory. We seem to be picking up the main things",
"#NB do we want to remove hyphens?\npunct = re.sub('-','',st.punctuation)\n\ndef comp_sentence(sentence):\n '''\n Takes a sentence and pre-processes it.\n The output is the sentence as a bag of words\n \n '''\n #Remove line breaks and hyphens\n sentence = re.sub('\\n',' ',sentence)\n sentence = re.sub('-',' ',sentence)\n \n #Lowercase and tokenise\n text_lowered = [x.lower() for x in sentence.split(\" \")]\n \n #Remove signs and digits\n text_no_signs_digits = [\"\".join([x for x in el if x not in punct+st.digits]) for \n el in text_lowered]\n \n #Remove stop words, single letters\n text_stopped = [w for w in text_no_signs_digits if w not in stopwords_c and\n len(w)>1]\n \n #Stem\n text_lemmatised = [lemmatizer.lemmatize(w) for w in text_stopped]\n \n #Output\n return(text_lemmatised)\n\n#Process text\nclean_corpus = [comp_sentence(x) for x in all_papers['summary']]\n\n#We remove rate words\nword_freqs = pd.Series([x for el in clean_corpus for x in el]).value_counts()\n\nword_freqs[:30]\n\nrare_words = word_freqs.index[word_freqs<=2]\nrare_words[:10]",
"Lots of the rare words seem to be typos and so forth. We remove them",
"#Removing rare words\nclean_corpus_no_rare = [[x for x in el if x not in rare_words] for el in clean_corpus]",
"2 NLP (topic modelling & word embeddings)",
"#Identify 2-grams (frequent in science!)\nbigram_transformer = gensim.models.Phrases(clean_corpus_no_rare)\n\n#Train the model on the corpus\n\n#Let's do a bit of grid search\n\n#model = gensim.models.Word2Vec(bigram_transformer[clean_corpus], size=360, window=15, min_count=2, iter=20)\n\nmodel.most_similar('ai_safety')\n\nmodel.most_similar('complexity')\n\nmodel.most_similar('github')\n\n#Create 3 different dictionaries and bows depending on word sizes\n\ndef remove_words_below_threshold(corpus,threshold):\n '''\n Takes a list of terms and removes any which are below a threshold of occurrences\n \n '''\n #Produce token frequencies\n token_frequencies = pd.Series([x for el in corpus for x in el]).value_counts()\n \n #Identify tokens to drop (below a threshold)\n tokens_to_drop = token_frequencies.index[token_frequencies<=threshold]\n \n #Processed corpus\n processed_corpus = [[x for x in el if x not in tokens_to_drop] for el in corpus]\n \n #Dictionary\n dictionary = gensim.corpora.Dictionary(processed_corpus)\n corpus_bow = [dictionary.doc2bow(x) for x in processed_corpus]\n \n return([dictionary,corpus_bow,processed_corpus])\n\n#Initial model run to see what comes out.\n\n#Transform corpus to bigrams\ntransformed_corpus = bigram_transformer[clean_corpus]\n\ncorpora_to_process = {str(x):remove_words_below_threshold(transformed_corpus,x) for x in [1,2,5,10]}\n\n#Need to turn this into a function.\n#Topic modelling\n\n#Parameters for Grid search.\nlda_params = list(product([100,200,300],[2,5]))\n\n#Model container\nlda_models = []\n\nfor x in lda_params:\n #Print stage\n print('{x}_{y}'.format(x=x[0],y=x[1]))\n \n #Load corpus and dict\n \n dictionary = corpora_to_process[str(x[1])][0]\n corpus_bow = corpora_to_process[str(x[1])][1]\n corpus = corpora_to_process[str(x[1])][2]\n \n print('training')\n #Train model\n mod = gensim.models.LdaModel(corpus_bow,num_topics=x[0],id2word=dictionary,\n passes=10,iterations=50)\n \n print('coherence')\n #Extract coherence\n cm = CoherenceModel(mod,texts=corpus,\n dictionary=dictionary,coherence='u_mass')\n \n #Get value\n try:\n coherence_value = cm.get_coherence()\n except:\n print('coherence_error')\n coherence_value='error'\n \n \n lda_models.append([x,mod,[coherence_value,cm]])\n\nwith open(mod_path+'/{t}_ai_topic_models.p'.format(t=today_str),'wb') as outfile:\n pickle.dump(lda_models,outfile)\n\n#Visualiase model performance\n\nmodel_eval = pd.DataFrame([[x[0][0],x[0][1],x[2][0]] for x in lda_models],columns=['topics','word_lim','coherence'])\n\nfig,ax = plt.subplots(figsize=(10,5))\n\ncols = ['red','green','blue']\nlegs = []\n\nfor num,x in enumerate(set(model_eval['word_lim'])):\n \n subset = model_eval.loc[[z == x for z in model_eval['word_lim']],:]\n \n ax.plot(subset.loc[:,'topics'],subset.loc[:,'coherence'],color=cols[num-1])\n \n legs.append([cols[num-1],x]) \n\nax.legend(labels=[x[1] for x in legs],title='Min word count')\nax.set_title('Model performance with different parameters')\n\nwith open(mod_path+'/19_8_2017_ai_topic_models.p','rb') as infile:\n lda_models = pickle.load(infile)\n\ncheck_model= lda_models[1][1]\n\n#Explore topics via LDAvis\nimport pyLDAvis.gensim\n\npyLDAvis.enable_notebook()\npyLDAvis.gensim.prepare(\n #Insert best model/corpus/topics here \n check_model, \n corpora_to_process[str(5)][1],\n corpora_to_process[str(5)][0])\n\n#Can we extract the relevant terms for the topics as in Sievert and Shirley in order to name them?\n\n#First - create a matrix with top 30 terms per topic\ntop_30_kws = 
[check_model.get_topic_terms(topicid=n,topn=1000) for n in np.arange(0,100)]\n\n#Keyword df where the columns are tokens and the rows are topics\ntop_30_kws_df = pd.concat([pd.DataFrame([x[1] for x in el],\n index=[x[0] for x in el]) for el in top_30_kws],\n axis=1).fillna(0).T.reset_index(drop=True)\n\n#This is the dictionary\nselected_dictionary = corpora_to_process[str(5)][0]\n\n#Total number of terms in the document\ntotal_terms = np.sum([vals for vals in selected_dictionary.dfs.values()])\n\n#Appearances of different terms\ndocument_freqs = pd.Series([v for v in selected_dictionary.dfs.values()],\n index=[k for k in selected_dictionary.dfs.keys()])[top_30_kws_df.columns]/total_terms\n\n#Normalise the terms (divide the vector of probabilities of each keywords in each topic by the totals)\ntop_30_kws_normalised = top_30_kws_df.apply(lambda x: x/document_freqs,axis=0)\n\n#Now we want to extract, for each topic, the relevance score.\n\ndef relevance_score(prob_in_topic,prob_in_corpus,id2word_lookup,lambda_par = 0.6):\n '''\n Combines the probabilities using the definition in Sievert and Shirley and returns the top 5 named\n #terms for each topic \n '''\n #Create dataframe\n combined = pd.concat([prob_in_topic,prob_in_corpus],axis=1)\n \n combined.columns=['prob_in_topic','prob_in_corpus']\n \n #Create relevance metric\n combined['relevance'] = lambda_par*combined['prob_in_topic'] + (1-lambda_par)*combined['prob_in_corpus']\n \n #Top words\n top_ids = list(combined.sort_values('relevance',ascending=False).index[:5])\n \n #Top words\n top_words = \"_\".join([id2word_lookup[this_id] for this_id in top_ids])\n \n return(top_words)\n\n\nrelevance_scores = [relevance_score(top_30_kws_df.iloc[n,:],\n top_30_kws_normalised.iloc[n,:],\n dictionary.id2token,lambda_par=0.6) for n in np.arange(len(top_30_kws_df))]\n\n%%time\n#Create a df with the topic predictions.\npaper_preds = check_model[corpora_to_process[str(5)][1]]\n\npaper_topics_df = pd.concat([pd.DataFrame([x[1] for x in el],index=[x[0] for x in el]) for el in paper_preds],\n axis=1).T\n\n#Replace NAs with zeros and drop pointless index\npaper_topics_df.fillna(value=0,inplace=True)\npaper_topics_df.reset_index(drop=True,inplace=True)\n\npaper_topics_df.columns = relevance_scores\n\npaper_topics_df.to_csv(int_data+'/{t}_paper_topic_mix.csv'.format(t=today_str),index=False)\n\n#paper_topics_df = pd.read_csv(int_data+'/{t}_paper_topic_mix.csv')\n\n#Quick test of Deep learning papers\n\n#These are papers with a topic that seems to capture deep learning\ndl_papers = [x>0.05 for x in paper_topics_df['network_training_model_deep_deep_learning']]\n\ndl_papers_metadata = pd.concat([pd.Series(dl_papers),all_papers],axis=1)\n\npaper_frequencies = pd.crosstab(dl_papers_metadata.year_published,dl_papers_metadata[0])\n\npaper_frequencies.columns=['no_dl','dl']\n\n\nfig,ax = plt.subplots(figsize=(10,5))\n\npaper_frequencies.plot.bar(stacked=True,ax=ax)\nax.set_title('Number of papers in the DL \\'topic\\'')\nax.legend(labels=['Not ANN/DL related','NN/DL topic >0.05'])",
"Some of this is interesting. Doesn't seem to be picking up the policy related terms (safety, discrimination)\nNext stages - focus on policy related terms. Can we look for papers in keyword dictionaries identified through the word embeddings?\nObtain Google Scholar data",
"#How many authors are there in the data? Can we collect all their institutions from Google Scholar\n\npaper_authors = pd.Series([x for el in all_papers['authors'] for x in el.split(\", \")])\n\npaper_authors_unique = paper_authors.drop_duplicates()\n\nlen(paper_authors_unique)",
"We have 68,000 authors. It might take a while to get their data from Google Scholar",
"#Top authors and frequencies\n\nauthors_freq = paper_authors.value_counts()\n\nfig,ax=plt.subplots(figsize=(10,3))\n\nax.hist(authors_freq,bins=30)\nax.set_title('Distribution of publications')\n\n#Pretty skewed distribution!\nprint(authors_freq.describe())\n\nnp.sum(authors_freq>2)",
"Less than 10,000 authors with 3+ papers in the data",
"get_scholar_data(\n\n%%time\n#Test run\nimport scholarly\n\[email protected](max_calls=30,time_interval=60)\ndef get_scholar_data(scholarly_object):\n '''''' \n try:\n scholarly_object = next(scholarly_object)\n metadata = {}\n metadata['name']=scholarly_object.name\n metadata['affiliation'] = scholarly_object.affiliation\n metadata['interests'] = scholarly_object.interests\n return(metadata)\n \n except:\n return('nothing')\n \n\n#Extract information from each query (it is a generator)\n#Get data\n\n#ml_author_gscholar=[]\n\nfor num,x in enumerate(paper_authors_unique[1484:]):\n if num % 100 == 0:\n print(str(num)+\":\"+x) \n\n result = get_scholar_data(scholarly.search_author(x))\n ml_author_gscholar.append(result)\n\nlen(ml_author_gscholar)"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jmschrei/pomegranate | tutorials/old/Tutorial_7_Parallelization.ipynb | mit | [
"pomegranate and parallelization\npomegranate supports parallelization through a set of built in functions based off of joblib. All computationally intensive functions in pomegranate are implemented in cython with the global interpreter lock (GIL) released, allowing for multithreading to be used for efficient parallel processing. The following functions can be called for parallelization:\n\nfit\nsummarize\npredict\npredict_proba\npredict_log_proba\nlog_probability\nprobability\n\nThese functions can all be simply parallelized by passing in n_jobs=X to the method calls. This tutorial will demonstrate how to use those calls. First we'll look at a simple multivariate Gaussian mixture model, and compare its performance to sklearn. Then we'll look at a hidden Markov model with Gaussian emissions, and lastly we'll look at a mixture of Gaussian HMMs. These can all utilize the build-in parallelization that pomegranate has.\nLet's dive right in!",
"%pylab inline\nfrom sklearn.mixture import GaussianMixture\nfrom pomegranate import *\nimport seaborn, time\nseaborn.set_style('whitegrid')\n\ndef create_dataset(n_samples, n_dim, n_classes, alpha=1):\n \"\"\"Create a random dataset with n_samples in each class.\"\"\"\n \n X = numpy.concatenate([numpy.random.normal(i*alpha, 1, size=(n_samples, n_dim)) for i in range(n_classes)])\n y = numpy.concatenate([numpy.zeros(n_samples) + i for i in range(n_classes)])\n idx = numpy.arange(X.shape[0])\n numpy.random.shuffle(idx)\n return X[idx], y[idx]",
"1. General Mixture Models\npomegranate has a very efficient implementation of mixture models, particularly Gaussian mixture models. Lets take a look at how fast pomegranate is versus sklearn, and then see how much faster parallelization can get it to be.",
"n, d, k = 1000000, 5, 3\nX, y = create_dataset(n, d, k)\n\nprint \"sklearn GMM\"\n%timeit GaussianMixture(n_components=k, covariance_type='full', max_iter=15, tol=1e-10).fit(X)\nprint \nprint \"pomegranate GMM\"\n%timeit GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, max_iterations=15, stop_threshold=1e-10)\nprint\nprint \"pomegranate GMM (4 jobs)\"\n%timeit GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, n_jobs=4, max_iterations=15, stop_threshold=1e-10)",
"It looks like on a large dataset not only is pomegranate faster than sklearn at performing 15 iterations of EM on 3 million 5 dimensional datapoints with 3 clusters, but the parallelization is able to help in speeding things up. \nLets now take a look at the time it takes to make predictions using GMMs. Lets fit the model to a small amount of data, and then predict a larger amount of data drawn from the same underlying distributions.",
"d, k = 25, 2\nX, y = create_dataset(1000, d, k)\na = GaussianMixture(k, n_init=1, max_iter=25).fit(X)\nb = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, max_iterations=25)\n\ndel X, y\nn = 1000000\nX, y = create_dataset(n, d, k)\n\nprint \"sklearn GMM\"\n%timeit -n 1 a.predict_proba(X)\nprint\nprint \"pomegranate GMM\"\n%timeit -n 1 b.predict_proba(X)\nprint\nprint \"pomegranate GMM (4 jobs)\"\n%timeit -n 1 b.predict_proba(X, n_jobs=4)",
"It looks like pomegranate can be slightly slower than sklearn when using a single processor, but that it can be parallelized to get faster performance. At the same time, predictions at this level happen so quickly (millions per second) that this may not be the most reliable test for parallelization.\nTo ensure that we're getting the exact same results just faster, lets subtract the predictions from each other and make sure that the sum is equal to 0.",
"print (b.predict_proba(X) - b.predict_proba(X, n_jobs=4)).sum()",
"Great, no difference between the two.\nLets now make sure that pomegranate and sklearn are learning basically the same thing. Lets fit both models to some 2 dimensional 2 component data and make sure that they both extract the underlying clusters by plotting them.",
"d, k = 2, 2\nX, y = create_dataset(1000, d, k, alpha=2)\na = GaussianMixture(k, n_init=1, max_iter=25).fit(X)\nb = GeneralMixtureModel.from_samples(MultivariateGaussianDistribution, k, X, max_iterations=25)\n\ny1, y2 = a.predict(X), b.predict(X)\n\nplt.figure(figsize=(16,6))\nplt.subplot(121)\nplt.title(\"sklearn clusters\", fontsize=14)\nplt.scatter(X[y1==0, 0], X[y1==0, 1], color='m', edgecolor='m')\nplt.scatter(X[y1==1, 0], X[y1==1, 1], color='c', edgecolor='c')\n\nplt.subplot(122)\nplt.title(\"pomegranate clusters\", fontsize=14)\nplt.scatter(X[y2==0, 0], X[y2==0, 1], color='m', edgecolor='m')\nplt.scatter(X[y2==1, 0], X[y2==1, 1], color='c', edgecolor='c')",
"It looks like we're getting the same basic results for the two. The two algorithms are initialized a bit differently, and so it can be difficult to directly compare the results between them, but it looks like they're getting roughly the same results.\n3. Multivariate Gaussian HMM\nNow let's move on to training a hidden Markov model with multivariate Gaussian emissions with a diagonal covariance matrix. We'll randomly generate some Gaussian distributed numbers and use pomegranate with either one or four threads to fit our model to the data.",
"X = numpy.random.randn(1000, 500, 50)\n\nprint \"pomegranate Gaussian HMM (1 job)\"\n%timeit -n 1 -r 1 HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=5)\nprint\nprint \"pomegranate Gaussian HMM (2 jobs)\"\n%timeit -n 1 -r 1 HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=5, n_jobs=2)\nprint\nprint \"pomegranate Gaussian HMM (2 jobs)\"\n%timeit -n 1 -r 1 HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=5, n_jobs=4)",
"All we had to do was pass in the n_jobs parameter to the fit function in order to get a speed improvement. It looks like we're getting a really good speed improvement, as well! This is mostly because the HMM algorithms perform a lot more operations than the other models, and so spend the vast majority of time with the GIL released. You may not notice as strong speedups when using a MultivariateGaussianDistribution because BLAS uses multithreaded operations already internally, even when only one job is specified.\nNow lets look at the prediction function to make sure the we're getting speedups there as well. You'll have to use a wrapper function to parallelize the predictions for a HMM because it returns an annotated sequence rather than a single value like a classic machine learning model might.",
"model = HiddenMarkovModel.from_samples(NormalDistribution, 5, X, max_iterations=2, verbose=False)\n\nprint \"pomegranate Gaussian HMM (1 job)\"\n%timeit predict_proba(model, X)\nprint\nprint \"pomegranate Gaussian HMM (2 jobs)\"\n%timeit predict_proba(model, X, n_jobs=2)",
"Great, we're getting a really good speedup on that as well! Looks like the parallel processing is more efficient with a bigger, more complex model, than with a simple one. This can make sense, because all inference/training is more complex, and so there is more time with the GIL released compared to with the simpler operations.\n4. Mixture of Hidden Markov Models\nLet's stack another layer onto this model by making it a mixture of these hidden Markov models, instead of a single one. At this point we're sticking a multivariate Gaussian HMM into a mixture and we're going to train this big thing in parallel.",
"def create_model(mus):\n n = mus.shape[0]\n \n starts = numpy.zeros(n)\n starts[0] = 1.\n \n ends = numpy.zeros(n)\n ends[-1] = 0.5\n \n transition_matrix = numpy.zeros((n, n))\n distributions = []\n \n for i in range(n):\n transition_matrix[i, i] = 0.5\n \n if i < n - 1:\n transition_matrix[i, i+1] = 0.5\n \n distribution = IndependentComponentsDistribution([NormalDistribution(mu, 1) for mu in mus[i]])\n distributions.append(distribution)\n \n model = HiddenMarkovModel.from_matrix(transition_matrix, distributions, starts, ends)\n return model\n \n\ndef create_mixture(mus):\n hmms = [create_model(mu) for mu in mus]\n return GeneralMixtureModel(hmms)\n\nn, d = 50, 10\nmus = [(numpy.random.randn(d, n)*0.2 + numpy.random.randn(n)*2).T for i in range(2)]\n\nmodel = create_mixture(mus)\nX = numpy.random.randn(400, 150, d)\n\nprint \"pomegranate Mixture of Gaussian HMMs (1 job)\"\n%timeit model.fit(X, max_iterations=5)\nprint\n\nmodel = create_mixture(mus)\nprint \"pomegranate Mixture of Gaussian HMMs (2 jobs)\"\n%timeit model.fit(X, max_iterations=5, n_jobs=2)",
"Looks like we're getting a really nice speed improvement when training this complex model. Let's take a look now at the time it takes to do inference with it.",
"model = create_mixture(mus)\n\nprint \"pomegranate Mixture of Gaussian HMMs (1 job)\"\n%timeit model.predict_proba(X)\nprint\n\nmodel = create_mixture(mus)\nprint \"pomegranate Mixture of Gaussian HMMs (2 jobs)\"\n%timeit model.predict_proba(X, n_jobs=2)",
"We're getting a good speed improvement here too through parallelization.\nConclusions\nHopefully you'll find pomegranate useful in your work! Parallelization should allow you to train complex models faster than before. Keep in mind though that there is an overhead to using parallel processing, and so it's possible that on some smaller examples it does not work as well. In general, the bigger the dataset, the closer to a linear speedup you'll get with pomegranate.\nIf you have any interesting examples of how you've used pomegranate in your work, I'd love to hear about them. In addition I'd like to hear any feedback you may have on features you'd like to see. Please shoot me an email. Good luck!"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
arsenovic/galgebra | examples/ipython/inner_product.ipynb | bsd-3-clause | [
"from __future__ import print_function\nfrom sympy import Symbol, symbols, sin, cos, Rational, expand, simplify, collect, S\nfrom galgebra.printer import Eprint, Get_Program, Print_Function, Format\nfrom galgebra.ga import Ga, one, zero\nfrom galgebra.mv import Nga\nFormat()\n\nX = (x, y, z) = symbols('x y z')\no3d = Ga('e_x e_y e_z', g=[1, 1, 1], coords=X)\n(ex, ey, ez) = o3d.mv()\ngrad = o3d.grad\n\nc = o3d.mv('c', 'scalar')\n\na = o3d.mv('a', 'vector')\nb = o3d.mv('b', 'vector')\n\nA = o3d.mv('A','mv')\nB = o3d.mv('B','mv')",
"The inner product of blades in GAlgebra is zero if either operand is a scalar:\n$$\\begin{split}\\begin{aligned}\n {\\boldsymbol{A}}{r}{\\wedge}{\\boldsymbol{B}}{s} &\\equiv {\\left <{{\\boldsymbol{A}}{r}{\\boldsymbol{B}}{s}} \\right >{r+s}} \\\n {\\boldsymbol{A}}{r}\\cdot{\\boldsymbol{B}}{s} &\\equiv {\\left { { \\begin{array}{cc}\n r\\mbox{ and }s \\ne 0: & {\\left <{{\\boldsymbol{A}}{r}{\\boldsymbol{B}}{s}} \\right >{{\\left |{r-s}\\right |}}} \\\n r\\mbox{ or }s = 0: & 0 \\end{array}} \\right }}\n \\end{aligned}\\end{split}$$\nThis definition comes from David Hestenes and Garret Sobczyk, “Clifford Algebra to Geometric Calculus,” Kluwer Academic Publishers, 1984.\nIn some other literature, the inner product is defined without the exceptional case for scalar part and the definition above is known as \"the modified Hestenes inner product\" (this name comes from the source code of GAViewer).",
"c|a\n\na|c\n\nc|A\n\nA|c",
"$ab=a \\wedge b + a \\cdot b$ holds for vectors:",
"a*b\n\na^b\n\na|b\n\n(a*b)-(a^b)-(a|b)",
"$aA=a \\wedge A + a \\cdot A$ holds for the products between vectors and multivectors:",
"a*A\n\na^A\n\na|A\n\n(a*A)-(a^A)-(a|A)",
"$AB=A \\wedge B + A \\cdot B$ does NOT hold for the products between multivectors and multivectors:",
"A*B\n\nA|B\n\n(A*B)-(A^B)-(A|B)\n\n(A<B)+(A|B)+(A>B)-A*B"
] | [
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jbliss1234/ML | t81_558_class4_class_reg.ipynb | apache-2.0 | [
"T81-558: Applications of Deep Neural Networks\nClass 4: Classification and Regression\n* Instructor: Jeff Heaton, School of Engineering and Applied Science, Washington University in St. Louis\n* For more information visit the class website.\nBinary Classification, Classification and Regression\n\nBinary Classification - Classification between two possibilities (positive and negative). Common in medical testing, does the person have the disease (positive) or not (negative).\nClassification - Classification between more than 2. The iris dataset (3-way classification).\nRegression - Numeric prediction. How many MPG does a car get?\n\nIn this class session we will look at some visualizations for all three.\nFeature Vector Encoding\nThese are exactly the same feature vector encoding functions from Class 3. They must be defined for this class as well. For more information, refer to class 3.",
"from sklearn import preprocessing\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\n\n# Encode text values to dummy variables(i.e. [1,0,0],[0,1,0],[0,0,1] for red,green,blue)\ndef encode_text_dummy(df,name):\n dummies = pd.get_dummies(df[name])\n for x in dummies.columns:\n dummy_name = \"{}-{}\".format(name,x)\n df[dummy_name] = dummies[x]\n df.drop(name, axis=1, inplace=True)\n\n# Encode text values to indexes(i.e. [1],[2],[3] for red,green,blue).\ndef encode_text_index(df,name):\n le = preprocessing.LabelEncoder()\n df[name] = le.fit_transform(df[name])\n return le.classes_\n\n# Encode a numeric column as zscores\ndef encode_numeric_zscore(df,name,mean=None,sd=None):\n if mean is None:\n mean = df[name].mean()\n\n if sd is None:\n sd = df[name].std()\n\n df[name] = (df[name]-mean)/sd\n\n# Convert all missing values in the specified column to the median\ndef missing_median(df, name):\n med = df[name].median()\n df[name] = df[name].fillna(med)\n\n# Convert a Pandas dataframe to the x,y inputs that TensorFlow needs\ndef to_xy(df,target):\n result = []\n for x in df.columns:\n if x != target:\n result.append(x)\n\n # find out the type of the target column. Is it really this hard? :(\n target_type = df[target].dtypes\n target_type = target_type[0] if hasattr(target_type, '__iter__') else target_type\n \n # Encode to int for classification, float otherwise. TensorFlow likes 32 bits.\n if target_type in (np.int64, np.int32):\n # Classification\n return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.int32)\n else:\n # Regression\n return df.as_matrix(result).astype(np.float32),df.as_matrix([target]).astype(np.float32)\n \n# Nicely formatted time string\ndef hms_string(sec_elapsed):\n h = int(sec_elapsed / (60 * 60))\n m = int((sec_elapsed % (60 * 60)) / 60)\n s = sec_elapsed % 60\n return \"{}:{:>02}:{:>05.2f}\".format(h, m, s)",
"Toolkit: Visualization Functions\nThis class will introduce 3 different visualizations that can be used with the two different classification type neural networks and regression neural networks.\n\nConfusion Matrix - For any type of classification neural network.\nROC Curve - For binary classification.\nLift Curve - For regression neural networks.\n\nThe code used to produce these visualizations is shown here:",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nfrom sklearn.metrics import roc_curve, auc\n\n# Plot a confusion matrix.\n# cm is the confusion matrix, names are the names of the classes.\ndef plot_confusion_matrix(cm, names, title='Confusion matrix', cmap=plt.cm.Blues):\n plt.imshow(cm, interpolation='nearest', cmap=cmap)\n plt.title(title)\n plt.colorbar()\n tick_marks = np.arange(len(names))\n plt.xticks(tick_marks, names, rotation=45)\n plt.yticks(tick_marks, names)\n plt.tight_layout()\n plt.ylabel('True label')\n plt.xlabel('Predicted label')\n \n\n# Plot an ROC. pred - the predictions, y - the expected output.\ndef plot_roc(pred,y):\n fpr, tpr, _ = roc_curve(y_test, pred)\n roc_auc = auc(fpr, tpr)\n\n plt.figure()\n plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)\n plt.plot([0, 1], [0, 1], 'k--')\n plt.xlim([0.0, 1.0])\n plt.ylim([0.0, 1.05])\n plt.xlabel('False Positive Rate')\n plt.ylabel('True Positive Rate')\n plt.title('Receiver Operating Characteristic (ROC)')\n plt.legend(loc=\"lower right\")\n plt.show()\n \n# Plot a lift curve. pred - the predictions, y - the expected output.\ndef chart_regression(pred,y):\n t = pd.DataFrame({'pred' : pred.flatten(), 'y' : y_test.flatten()})\n t.sort_values(by=['y'],inplace=True)\n\n a = plt.plot(t['y'].tolist(),label='expected')\n b = plt.plot(t['pred'].tolist(),label='prediction')\n plt.ylabel('output')\n plt.legend()\n plt.show()",
"Binary Classification\nBinary classification is used to create a model that classifies between only two classes. These two classes are often called \"positive\" and \"negative\". Consider the following program that uses the wcbreast_wdbc dataset to classify if a breast tumor is cancerous (malignant) or not (benign). The iris dataset is not binary, because there are three classes (3 types of iris).",
"import os\nimport pandas as pd\nfrom sklearn.cross_validation import train_test_split\nimport tensorflow.contrib.learn as skflow\nimport numpy as np\nfrom sklearn import metrics\n\npath = \"./data/\"\n \nfilename = os.path.join(path,\"wcbreast_wdbc.csv\") \ndf = pd.read_csv(filename,na_values=['NA','?'])\n\n# Encode feature vector\ndf.drop('id',axis=1,inplace=True)\nencode_numeric_zscore(df,'mean_radius')\nencode_text_index(df,'mean_texture') \nencode_text_index(df,'mean_perimeter')\nencode_text_index(df,'mean_area')\nencode_text_index(df,'mean_smoothness')\nencode_text_index(df,'mean_compactness')\nencode_text_index(df,'mean_concavity')\nencode_text_index(df,'mean_concave_points')\nencode_text_index(df,'mean_symmetry')\nencode_text_index(df,'mean_fractal_dimension')\nencode_text_index(df,'se_radius')\nencode_text_index(df,'se_texture')\nencode_text_index(df,'se_perimeter')\nencode_text_index(df,'se_area')\nencode_text_index(df,'se_smoothness')\nencode_text_index(df,'se_compactness')\nencode_text_index(df,'se_concavity')\nencode_text_index(df,'se_concave_points')\nencode_text_index(df,'se_symmetry')\nencode_text_index(df,'se_fractal_dimension')\nencode_text_index(df,'worst_radius')\nencode_text_index(df,'worst_texture')\nencode_text_index(df,'worst_perimeter')\nencode_text_index(df,'worst_area')\nencode_text_index(df,'worst_smoothness')\nencode_text_index(df,'worst_compactness')\nencode_text_index(df,'worst_concavity')\nencode_text_index(df,'worst_concave_points')\nencode_text_index(df,'worst_symmetry')\nencode_text_index(df,'worst_fractal_dimension')\ndiagnosis = encode_text_index(df,'diagnosis')\nnum_classes = len(diagnosis)\n\n# Create x & y for training\n\n# Create the x-side (feature vectors) of the training\nx, y = to_xy(df,'diagnosis')\n \n# Split into train/test\nx_train, x_test, y_train, y_test = train_test_split( \n x, y, test_size=0.25, random_state=42) \n \n# Create a deep neural network with 3 hidden layers of 10, 20, 10\nclassifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=num_classes,\n steps=10000)\n\n# Early stopping\nearly_stop = skflow.monitors.ValidationMonitor(x_test, y_test,\n early_stopping_rounds=200, print_steps=50, n_classes=num_classes)\n \n# Fit/train neural network\nclassifier.fit(x_train, y_train, early_stop)\n\n# Measure accuracy\nscore = metrics.accuracy_score(y, classifier.predict(x))\nprint(\"Final accuracy: {}\".format(score))\n",
"Confusion Matrix\nThe confusion matrix is a common visualization for both binary and larger classification problems. Often a model will have difficulty differentiating between two classes. For example, a neural network might be really good at telling the difference between cats and dogs, but not so good at telling the difference between dogs and wolves. The following code generates a confusion matrix:",
"import numpy as np\n\nfrom sklearn import svm, datasets\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import confusion_matrix\n\npred = classifier.predict(x_test)\n \n# Compute confusion matrix\ncm = confusion_matrix(y_test, pred)\nnp.set_printoptions(precision=2)\nprint('Confusion matrix, without normalization')\nprint(cm)\nplt.figure()\nplot_confusion_matrix(cm, diagnosis)\n\n# Normalize the confusion matrix by row (i.e by the number of samples\n# in each class)\ncm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\nprint('Normalized confusion matrix')\nprint(cm_normalized)\nplt.figure()\nplot_confusion_matrix(cm_normalized, diagnosis, title='Normalized confusion matrix')\n\nplt.show()",
"The above two confusion matrixes show the same network. The bottom (normalized) is the type you will normally see. Notice the two labels. The label \"B\" means benign (no cancer) and the label \"M\" means malignant (cancer). The left-right (x) axis are the predictions, the top-bottom) are the expected outcomes. A perfect model (that never makes an error) has a dark blue diagonal that runs from top-left to bottom-right. \nTo read, consider the top-left square. This square indicates \"true labeled\" of B and also \"predicted label\" of B. This is good! The prediction matched the truth. The blueness of this box represents how often \"B\" is classified correct. It is not darkest blue. This is because the square to the right(which is off the perfect diagonal) has some color. This square indicates truth of \"B\" but prediction of \"M\". The white square, at the bottom-left, indicates a true of \"M\" but predicted of \"B\". The whiteness indicates this rarely happens. \nYour conclusion from the above chart is that the model sometimes classifies \"B\" as \"M\" (a false negative), but never mis-classifis \"M\" as \"B\". Always look for the dark diagonal, this is good!\nROC Curves\nROC curves can be a bit confusing. However, they are very common. It is important to know how to read them. Even their name is confusing. Do not worry about their name, it comes from electrical engineering (EE).\nBinary classification is common in medical testing. Often you want to diagnose if someone has a disease. This can lead to two types of errors, know as false positives and false negatives:\n\nFalse Positive - Your test (neural network) indicated that the patient had the disease; however, the patient did not have the disease.\nFalse Negative - Your test (neural network) indicated that the patient did not have the disease; however, the patient did have the disease.\nTrue Positive - Your test (neural network) correctly identified that the patient had the disease.\nTrue Negative - Your test (neural network) correctly identified that the patient did not have the disease.\n\nTypes of errors:\n\nNeural networks classify in terms of probbility of it being positive. However, at what probability do you give a positive result? Is the cutoff 50%? 90%? Where you set this cutoff is called the threshold. Anything above the cutoff is positive, anything below is negative. Setting this cutoff allows the model to be more sensative or specific:\n\nThe following shows a more sensitive cutoff:\n\nAn ROC curve measures how good a model is regardless of the cutoff. The following shows how to read a ROC chart:\n\nThe following code shows an ROC chart for the breast cancer neural network. The area under the curve (AUC) is also an important measure. The larger the AUC, the better.",
"pred = classifier.predict_proba(x_test)\npred = pred[:,1] # Only positive cases\n# print(pred[:,1])\nplot_roc(pred,y_test)\n",
"Classification\nWe've already seen multi-class classification, with the iris dataset. Confusion matrixes work just fine with 3 classes. The following code generates a confusion matrix for iris.",
"import os\nimport pandas as pd\nfrom sklearn.cross_validation import train_test_split\nimport tensorflow.contrib.learn as skflow\nimport numpy as np\n\npath = \"./data/\"\n \nfilename = os.path.join(path,\"iris.csv\") \ndf = pd.read_csv(filename,na_values=['NA','?'])\n\n# Encode feature vector\nencode_numeric_zscore(df,'petal_w')\nencode_numeric_zscore(df,'petal_l')\nencode_numeric_zscore(df,'sepal_w')\nencode_numeric_zscore(df,'sepal_l')\nspecies = encode_text_index(df,\"species\")\nnum_classes = len(species)\n\n# Create x & y for training\n\n# Create the x-side (feature vectors) of the training\nx, y = to_xy(df,'species')\n \n# Split into train/test\nx_train, x_test, y_train, y_test = train_test_split( \n x, y, test_size=0.25, random_state=45) \n # as much as I would like to use 42, it gives a perfect result, and a boring confusion matrix!\n \n# Create a deep neural network with 3 hidden layers of 10, 20, 10\nclassifier = skflow.TensorFlowDNNClassifier(hidden_units=[10, 20, 10], n_classes=num_classes,\n steps=10000)\n\n# Early stopping\nearly_stop = skflow.monitors.ValidationMonitor(x_test, y_test,\n early_stopping_rounds=200, print_steps=50, n_classes=num_classes)\n \n# Fit/train neural network\nclassifier.fit(x_train, y_train, early_stop)\n\n\nimport numpy as np\n\nfrom sklearn import svm, datasets\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.metrics import confusion_matrix\n\n\n\npred = classifier.predict(x_test)\n \n# Compute confusion matrix\ncm = confusion_matrix(y_test, pred)\nnp.set_printoptions(precision=2)\nprint('Confusion matrix, without normalization')\nprint(cm)\nplt.figure()\nplot_confusion_matrix(cm, species)\n\n# Normalize the confusion matrix by row (i.e by the number of samples\n# in each class)\ncm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]\nprint('Normalized confusion matrix')\nprint(cm_normalized)\nplt.figure()\nplot_confusion_matrix(cm_normalized, species, title='Normalized confusion matrix')\n\nplt.show()",
"See the strong diagonal? Iris is easy. See the light blue near the bottom? Sometimes virginica is confused for versicolor.\nRegression\nWe've already seen regression with the MPG dataset. Regression uses its own set of visualizations, one of the most common is the lift chart. The following code generates a lift chart.",
"import tensorflow.contrib.learn as skflow\nimport pandas as pd\nimport os\nimport numpy as np\nfrom sklearn import metrics\nfrom scipy.stats import zscore\n\npath = \"./data/\"\n\nfilename_read = os.path.join(path,\"auto-mpg.csv\")\ndf = pd.read_csv(filename_read,na_values=['NA','?'])\n\n# create feature vector\nmissing_median(df, 'horsepower')\ndf.drop('name',1,inplace=True)\nencode_numeric_zscore(df, 'horsepower')\nencode_numeric_zscore(df, 'weight')\nencode_numeric_zscore(df, 'cylinders')\nencode_numeric_zscore(df, 'displacement')\nencode_numeric_zscore(df, 'acceleration')\nencode_text_dummy(df, 'origin')\n\n# Encode to a 2D matrix for training\nx,y = to_xy(df,['mpg'])\n\n# Split into train/test\nx_train, x_test, y_train, y_test = train_test_split(\n x, y, test_size=0.25, random_state=42)\n\n# Create a deep neural network with 3 hidden layers of 50, 25, 10\nregressor = skflow.TensorFlowDNNRegressor(hidden_units=[50, 25, 10], steps=5000)\n\n# Early stopping\nearly_stop = skflow.monitors.ValidationMonitor(x_test, y_test,\n early_stopping_rounds=200, print_steps=50)\n\n# Fit/train neural network\nregressor.fit(x_train, y_train, early_stop)\n\npred = regressor.predict(x_test)\n\nchart_regression(pred,y_test)",
"To generate a lift chart, perform the following activities:\n\nSort the data by expected output. Plot the blue line above.\nFor every point on the x-axis plot the predicted value for that same data point. This is the green line above.\nThe x-axis is just 0 to 100% of the dataset. The expected always starts low and ends high.\nThe y-axis is ranged according to the values predicted.\n\nReading a lift chart:\n* The expected and predict lines should be close. Notice where one is above the ot other.\n* The above chart is the most accurate on lower MPG."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
agmarrugo/sensors-actuators | notebooks/Ex_2_3.ipynb | mit | [
"The transfer function\nAnalytic form of transfer function. In certain cases the transfer function is available as an analytic expression. One common transfer function used for resistance temperature sensors (to be discussed in Chapter 3) is the Callendar– Van Duzen equation. It gives the resistance of the sensor at a temperature T as\n$$R(T)=R_{0}(1+AT+BT^2+C(T-100)T^3) \\enspace,$$\nwhere the constants A, B, and C are determined by direct measurement of resistance for the specific material used in the sensor and $R_0$ is the temperature of the sensor at 0 ºC. Typical temperatures used for calibration are the oxygen point (-182.962 ºC; the equilibrium between liquid oxygen and its vapor), the triple point of water (0.01 ºC; the point of equilibrium temperature between ice, liquid water, and water vapor), the steam point (100 ºC; the equilibrium point between water and vapor), the zinc point (419.58 ºC; the equilibrium point between solid and liquid zinc), the silver point (961.93 ºC), and the gold point (1064.43 ºC), as well as others. Consider a platinum resistance sensor with a nominal resistance of 25 $\\Omega $ at 0 C. To calibrate the sensor its resistance is measured at the oxygen point as 6.2 $\\Omega $, at the steam point as 35.6 $\\Omega $, and at the zinc point as 66.1 $\\Omega $. Calculate the coefficients A, B, and C and plot the transfer function between -200 ºC and 600 ºC.\nSolution\nIn order to obtain the sensor calibration, several measurements at different temperatures where taken:\n\n\n6.2 $\\Omega $ at a temperature of -182.962 ºC (oxygen point).\n\n\n35.6 $\\Omega $ at a temperature of 100 ºC (steam point).\n\n\n66.1 $\\Omega $ at a temperature of 419.58 ºC (zinc point)\n\n\nLet's plot the points,",
"import matplotlib.pyplot as plt\nimport numpy as np\nfrom math import log, exp\n%matplotlib inline\nfrom scipy.interpolate import InterpolatedUnivariateSpline\n\n\nT_exp = np.array([-182.962,100,419.58]);# Celcius\nR_exp = np.array([6.2 ,35.6,66.1])# Ohm\nplt.plot(T_exp,R_exp,'*');\nplt.ylabel('Resistance of the sensor [Ohm]')\nplt.xlabel('Temperature [C]')\nplt.show()",
"Reordering the Callendar-Van Duzen equation we obtain the following\n$$ AT+BT^2+C(T-100)T^3 =\\frac{R(T)}{R_0}-1 \\enspace,$$\nwhich we can write in matrix form as $Mx=p$, where\n$$\\begin{bmatrix} T_1 & T_1^2 & (T_1-100)T_1^3 \\ T_2 & T_2^2 & (T_2-100)T_2^3 \\ T_3 & T_3^2 & (T_3-100)T_3^3\\end{bmatrix} \\begin{bmatrix} A\\ B \\ C\\end{bmatrix} = \\begin{bmatrix} \\frac{R(T_1)}{R_0}-1 \\ \\frac{R(T_2)}{R_0}-1 \\ \\frac{R(T_3)}{R_0}-1\\end{bmatrix} \\enspace.$$\nBecause $M$ is square we can solve by computing $M^{-1}$ directly.",
"R0=25;\nM=np.array([[T_exp[0],(T_exp[0])**2,(T_exp[0]-100)*(T_exp[0])**3],[T_exp[1],(T_exp[1])**2,(T_exp[1]-100)*(T_exp[1])**3],[T_exp[2],(T_exp[2])**2,(T_exp[2]-100)*(T_exp[2])**3]]);\np=np.array([[(R_exp[0]/R0)-1],[(R_exp[1]/R0)-1],[(R_exp[2]/R0)-1]]);\nx = np.linalg.solve(M,p) #solve linear equations system\n\nnp.set_printoptions(precision=3)\n\nprint('M')\nprint(M)\nprint('\\n')\nprint('p')\nprint(p)\nprint('\\n')\nprint('x')\nprint(x)",
"We have found the coeffiecients $A$, $B$, and $C$ necessary to describe the sensor's transfer function. Now we plot it from -200 C a 600 C.",
"A=x[0];B=x[1];C=x[2];\nT_range= np.arange(start = -200, stop = 601, step = 1);\nR_funT= R0*(1+A[0]*T_range+B[0]*(T_range)**2+C[0]*(T_range-100)*(T_range)**3);\nplt.plot(T_range,R_funT,T_exp[0],R_exp[0],'ro',T_exp[1],R_exp[1],'ro',T_exp[2],R_exp[2],'ro');\nplt.ylabel('Sensor resistance [Ohm]')\nplt.xlabel('Temperature [C]')\nplt.show()\n",
"We see the fit is accurate. Note that our approach is also valid if we have more experimental points, in which case the system of equations $Mx=p$ is solved in the Least-Squares sense.\n\nThis page was written in the IPython Jupyter Notebook. To download the notebook click on this option at the top menu or get it from the github repo."
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rayjustinhuang/DataAnalysisandMachineLearning | Linear Programming with OR-Tools.ipynb | mit | [
"Linear Programming with OR-Tools\nIn this notebook, we do some basic LP solving with Google's OR-Tools. Problems used will be examples in Hamdy Taha's Operations Research: An Introduction, 9th Edition, which I have in paperback.",
"from ortools.linear_solver import pywraplp",
"Reddy Mikks model\nGiven the following variables:\n$\\begin{aligned}\nx_1 = \\textrm{Tons of exterior paint produced daily} \\newline\nx_2 = \\textrm{Tons of interior paint produced daily}\n\\end{aligned}$\nand knowing that we want to maximize the profit, where \\$5000 is the profit from exterior paint and \\$4000 is the profit from a ton of interior paint, the Reddy Mikks model is:\n$$\\textrm{Maximize } z = 5x_1 + 4x_2$$\nsubject to\n$$6x_1 + 4x_2 \\le 24$$\n$$x_1 + 2x_2 \\le 6$$\n$$-x_1 + x_2 \\le 1$$\n$$x_2 \\le 2$$\n$$x_1, x_2 \\ge 0$$",
"reddymikks = pywraplp.Solver('Reddy_Mikks', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)\n\nx1 = reddymikks.NumVar(0, reddymikks.infinity(), 'x1')\nx2 = reddymikks.NumVar(0, reddymikks.infinity(), 'x2')\n\nreddymikks.Add(6*x1 + 4*x2 <= 24)\nreddymikks.Add(x1 + 2*x2 <= 6)\nreddymikks.Add(-x1 + x2 <= 1)\nreddymikks.Add(x2 <= 2)\n\nprofit = reddymikks.Objective()\nprofit.SetCoefficient(x1, 5)\nprofit.SetCoefficient(x2, 4)\nprofit.SetMaximization()\n\nstatus = reddymikks.Solve()\n\nif status not in [reddymikks.OPTIMAL, reddymikks.FEASIBLE]:\n raise Exception('No feasible solution found')\n \nprint(\"The company should produce\",round(x1.solution_value(),2),\"tons of exterior paint\")\nprint(\"The company should produce\",round(x2.solution_value(),2),\"tons of interior paint\")\nprint(\"The optimal profit is\", profit.Value(), 'thousand USD')",
"More simple problems\nA company that operates 10 hours a day manufactures two products on three sequential processes. The following data characterizes the problem:",
"import pandas as pd\n\nproblemdata = pd.DataFrame({'Process 1': [10, 5], 'Process 2':[6, 20], 'Process 3':[8, 10], 'Unit profit':[20, 30]})\nproblemdata.index = ['Product 1', 'Product 2']\n\nproblemdata",
"Where there are 10 hours a day dedicated to production. Process times are given in minutes per unit while profit is given in USD.\nThe optimal mix of the two products would be characterized by the following model:\n$\\begin{aligned}\nx_1 = \\textrm{Units of product 1} \\newline\nx_2 = \\textrm{Units of product 2}\n\\end{aligned}$\n$$\\textrm{Maximize } z = 20x_1 + 30x_2$$\nsubject to\n$$\\begin{array}{rcl}\n10x_1 + 5x_2 \\le 600 \\newline\n6x_1 + 20x_2 \\le 600 \\newline\n8x_1 + 10x_2 \\le 600 \\newline\nx_1, x_2 \\ge 0\n\\end{array}$$\n(we will assume that continuous solution values are acceptable for this problem)",
"simpleprod = pywraplp.Solver('Simple_Production', pywraplp.Solver.GLOP_LINEAR_PROGRAMMING)\n\nx1 = simpleprod.NumVar(0, simpleprod.infinity(), 'x1')\nx2 = simpleprod.NumVar(0, simpleprod.infinity(), 'x2')\n\nfor i in problemdata.columns[:-1]:\n simpleprod.Add(problemdata.loc[problemdata.index[0], i]*x1 + problemdata.loc[problemdata.index[1], i]*x2 <= 600)\n\nprofit = simpleprod.Objective()\nprofit.SetCoefficient(x1, 20)\nprofit.SetCoefficient(x2, 30)\nprofit.SetMaximization()\n\nstatus = simpleprod.Solve()\n\nif status not in [simpleprod.OPTIMAL, simpleprod.FEASIBLE]:\n raise Exception('No feasible solution found')\n \nprint(\"The company should produce\",round(x1.solution_value(),2),\"units of product 1\")\nprint(\"The company should produce\",round(x2.solution_value(),2),\"units of product 2\")\nprint(\"The optimal profit is\", round(profit.Value(),2), 'USD')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
RaRe-Technologies/gensim | docs/notebooks/nmslibtutorial.ipynb | lgpl-2.1 | [
"Similarity Queries using Nmslib Tutorial\nThis tutorial is about using the (Non-Metric Space Library (NMSLIB)) library for similarity queries with a Word2Vec model built with gensim.\nWhy use Nmslib?\nThe current implementation for finding k nearest neighbors in a vector space in gensim has linear complexity via brute force in the number of indexed documents, although with extremely low constant factors. The retrieved results are exact, which is an overkill in many applications: approximate results retrieved in sub-linear time may be enough. Nmslib can find approximate nearest neighbors much faster.\nCompared to annoy, nmslib has more parameteres to control the build and query time and accuracy. Nmslib can achieve faster and more accurate nearest neighbors search than annoy. This figure shows a comparison between annoy and nmslib indexer with differents parameters. This shows nmslib is better than annoy.\n\nPrerequisites\nAdditional libraries needed for this tutorial:\n- nmslib\n- annoy\n- psutil\n- matplotlib\nOutline\n\nDownload Text8 Corpus\nBuild Word2Vec Model\nConstruct NmslibIndex with model & make a similarity query\nVerify & Evaluate performance\nEvaluate relationship of parameters to initialization/query time and accuracy, compared with annoy\nWork with Google's word2vec C formats",
"# pip install watermark\n%reload_ext watermark\n%watermark -v -m -p gensim,numpy,scipy,psutil,matplotlib",
"1. Download Text8 Corpus",
"import os.path\nif not os.path.isfile('text8'):\n !wget -c http://mattmahoney.net/dc/text8.zip\n !unzip text8.zip",
"Import & Set up Logging\nI'm not going to set up logging due to the verbose input displaying in notebooks, but if you want that, uncomment the lines in the cell below.",
"LOGS = False\n\nif LOGS:\n import logging\n logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)",
"2. Build Word2Vec Model",
"from gensim.models import Word2Vec, KeyedVectors\nfrom gensim.models.word2vec import Text8Corpus\n\n# Using params from Word2Vec_FastText_Comparison\n\nparams = {\n 'alpha': 0.05,\n 'size': 100,\n 'window': 5,\n 'iter': 5,\n 'min_count': 5,\n 'sample': 1e-4,\n 'sg': 1,\n 'hs': 0,\n 'negative': 5\n}\n\nmodel = Word2Vec(Text8Corpus('text8'), **params)\nprint(model)",
"See the Word2Vec tutorial for how to initialize and save this model.\nComparing the traditional implementation, Annoy and Nmslib approximation",
"# Set up the model and vector that we are using in the comparison\nfrom gensim.similarities.index import AnnoyIndexer\nfrom gensim.similarities.nmslib import NmslibIndexer\n\nmodel.init_sims()\nannoy_index = AnnoyIndexer(model, 300)\nnmslib_index = NmslibIndexer(model, {'M': 100, 'indexThreadQty': 1, 'efConstruction': 100}, {'efSearch': 10})\n\n# Dry run to make sure both indices are fully in RAM\nvector = model.wv.syn0norm[0]\nprint(model.most_similar([vector], topn=5, indexer=annoy_index))\nprint(model.most_similar([vector], topn=5, indexer=nmslib_index))\nprint(model.most_similar([vector], topn=5))\n\nimport time\nimport numpy as np\n\ndef avg_query_time(annoy_index=None, queries=1000):\n \"\"\"\n Average query time of a most_similar method over 1000 random queries,\n uses annoy if given an indexer\n \"\"\"\n total_time = 0\n for _ in range(queries):\n rand_vec = model.wv.syn0norm[np.random.randint(0, len(model.wv.vocab))]\n start_time = time.clock()\n model.most_similar([rand_vec], topn=5, indexer=annoy_index)\n total_time += time.clock() - start_time\n return total_time / queries\n\nqueries = 10000\n\ngensim_time = avg_query_time(queries=queries)\nannoy_time = avg_query_time(annoy_index, queries=queries)\nnmslib_time = avg_query_time(nmslib_index, queries=queries)\nprint(\"Gensim (s/query):\\t{0:.5f}\".format(gensim_time))\nprint(\"Annoy (s/query):\\t{0:.5f}\".format(annoy_time))\nprint(\"Nmslib (s/query):\\t{0:.5f}\".format(nmslib_time))\nspeed_improvement_gensim = gensim_time / nmslib_time\nspeed_improvement_annoy = annoy_time / nmslib_time\nprint (\"\\nNmslib is {0:.2f} times faster on average on this particular run\".format(speed_improvement_gensim))\nprint (\"\\nNmslib is {0:.2f} times faster on average than annoy on this particular run\".format(speed_improvement_annoy))\n",
"3. Construct Nmslib Index with model & make a similarity query\nCreating an indexer\nAn instance of NmslibIndexer needs to be created in order to use Nmslib in gensim. The NmslibIndexer class is located in gensim.similarities.nmslib\nNmslibIndexer() takes three parameters:\nmodel: A Word2Vec or Doc2Vec model\nindex_params: Parameters for building nmslib indexer. index_params effects the build time and the index size. The example is {'M': 100, 'indexThreadQty': 1, 'efConstruction': 100}. Increasing the value of M and efConstruction improves the accuracy of search. However this also leads to longer indexing times. indexThreadQty is the number of thread. \nquery_time_params: Parameters for querying on nmslib indexer. query_time_params effects the query time and the search accuracy. The example is {'efSearch': 100}. A larger efSearch will give more accurate results, but larger query time. \nMore information can be found here. The relationship between parameters, build/query time, and accuracy will be investigated later in the tutorial. \nNow that we are ready to make a query, lets find the top 5 most similar words to \"science\" in the Text8 corpus. To make a similarity query we call Word2Vec.most_similar like we would traditionally, but with an added parameter, indexer. The only supported indexerers in gensim as of now are Annoy and Nmslib.",
"# Building nmslib indexer\nnmslib_index = NmslibIndexer(model, {'M': 100, 'indexThreadQty': 1, 'efConstruction': 100}, {'efSearch': 10})\n# Derive the vector for the word \"science\" in our model\nvector = model[\"science\"]\n# The instance of AnnoyIndexer we just created is passed \napproximate_neighbors = model.most_similar([vector], topn=11, indexer=nmslib_index)\n\n# Neatly print the approximate_neighbors and their corresponding cosine similarity values\nprint(\"Approximate Neighbors\")\nfor neighbor in approximate_neighbors:\n print(neighbor)\n\nnormal_neighbors = model.most_similar([vector], topn=11)\nprint(\"\\nNormal (not nmslib-indexed) Neighbors\")\nfor neighbor in normal_neighbors:\n print(neighbor)",
"Analyzing the results\nThe closer the cosine similarity of a vector is to 1, the more similar that word is to our query, which was the vector for \"science\". In this case the results are almostly same.\n4. Verify & Evaluate performance\nPersisting Indexes\nYou can save and load your indexes from/to disk to prevent having to construct them each time. This will create two files on disk, fname and fname.d. Both files are needed to correctly restore all attributes.",
"import os\n\nfname = '/tmp/mymodel.index'\n\n# Persist index to disk\nnmslib_index.save(fname)\n\n# Load index back\nif os.path.exists(fname):\n nmslib_index2 = NmslibIndexer.load(fname)\n nmslib_index2.model = model\n\n# Results should be identical to above\nvector = model[\"science\"]\napproximate_neighbors2 = model.most_similar([vector], topn=11, indexer=nmslib_index2)\nfor neighbor in approximate_neighbors2:\n print(neighbor)\n \nassert approximate_neighbors == approximate_neighbors2",
"Be sure to use the same model at load that was used originally, otherwise you will get unexpected behaviors.\nSave memory by memory-mapping indices saved to disk\nNmslib library has a useful feature that indices can be memory-mapped from disk. It saves memory when the same index is used by several processes.\nBelow are two snippets of code. First one has a separate index for each process. The second snipped shares the index between two processes via memory-mapping. The second example uses less total RAM as it is shared.",
"# Remove verbosity from code below (if logging active)\n\nif LOGS:\n logging.disable(logging.CRITICAL)\n\nfrom multiprocessing import Process\nimport psutil",
"Bad Example: Two processes load the Word2vec model from disk and create there own Nmslib indices from that model.",
"%%time\n\nmodel.save('/tmp/mymodel.pkl')\n\ndef f(process_id):\n print('Process Id: {}'.format(os.getpid()))\n process = psutil.Process(os.getpid())\n new_model = Word2Vec.load('/tmp/mymodel.pkl')\n vector = new_model[\"science\"]\n nmslib_index = NmslibIndexer(new_model, {'M': 100, 'indexThreadQty': 1, 'efConstruction': 100}, {'efSearch': 10})\n approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=nmslib_index)\n print('\\nMemory used by process {}: {}\\n---'.format(os.getpid(), process.memory_info()))\n\n# Creating and running two parallel process to share the same index file.\np1 = Process(target=f, args=('1',))\np1.start()\np1.join()\np2 = Process(target=f, args=('2',))\np2.start()\np2.join()",
"Good example. Two processes load both the Word2vec model and index from disk and memory-map the index",
"%%time\n\nmodel.save('/tmp/mymodel.pkl')\n\ndef f(process_id):\n print('Process Id: {}'.format(os.getpid()))\n process = psutil.Process(os.getpid())\n new_model = Word2Vec.load('/tmp/mymodel.pkl')\n vector = new_model[\"science\"]\n nmslib_index = NmslibIndexer.load('/tmp/mymodel.index')\n nmslib_index.model = new_model\n approximate_neighbors = new_model.most_similar([vector], topn=5, indexer=nmslib_index)\n print('\\nMemory used by process {}: {}\\n---'.format(os.getpid(), process.memory_info()))\n\n# Creating and running two parallel process to share the same index file.\np1 = Process(target=f, args=('1',))\np1.start()\np1.join()\np2 = Process(target=f, args=('2',))\np2.start()\np2.join()",
"5. Evaluate relationship of parameters to initialization/query time and accuracy, compared with annoy",
"import matplotlib.pyplot as plt\n%matplotlib inline",
"Build dataset of Initialization times and accuracy measures",
"exact_results = [element[0] for element in model.most_similar([model.wv.syn0norm[0]], topn=100)]\n\n# For calculating query time\nqueries = 1000\n\ndef create_evaluation_graph(x_values, y_values_init, y_values_accuracy, y_values_query, param_name):\n plt.figure(1, figsize=(12, 6))\n plt.subplot(231)\n plt.plot(x_values, y_values_init)\n plt.title(\"{} vs initalization time\".format(param_name))\n plt.ylabel(\"Initialization time (s)\")\n plt.xlabel(param_name)\n plt.subplot(232)\n plt.plot(x_values, y_values_accuracy)\n plt.title(\"{} vs accuracy\".format(param_name))\n plt.ylabel(\"% accuracy\")\n plt.xlabel(param_name)\n plt.tight_layout()\n plt.subplot(233)\n plt.plot(y_values_init, y_values_accuracy)\n plt.title(\"Initialization time vs accuracy\")\n plt.ylabel(\"% accuracy\")\n plt.xlabel(\"Initialization time (s)\")\n plt.tight_layout()\n plt.subplot(234)\n plt.plot(x_values, y_values_query)\n plt.title(\"{} vs query time\".format(param_name))\n plt.ylabel(\"query time\")\n plt.xlabel(param_name)\n plt.tight_layout()\n plt.subplot(235)\n plt.plot(y_values_query, y_values_accuracy)\n plt.title(\"query time vs accuracy\")\n plt.ylabel(\"% accuracy\")\n plt.xlabel(\"query time (s)\")\n plt.tight_layout()\n plt.show()\n\ndef evaluate_nmslib_performance(parameter, is_parameter_query, parameter_start, parameter_end, parameter_step):\n nmslib_x_values = []\n nmslib_y_values_init = []\n nmslib_y_values_accuracy = []\n nmslib_y_values_query = []\n index_params = {'M': 100, 'indexThreadQty': 10, 'efConstruction': 100, 'post': 0}\n query_params = {'efSearch': 100}\n \n for x in range(parameter_start, parameter_end, parameter_step):\n nmslib_x_values.append(x)\n start_time = time.time()\n if is_parameter_query:\n query_params[parameter] = x\n else:\n index_params[parameter] = x\n nmslib_index = NmslibIndexer(model\n , index_params\n , query_params)\n nmslib_y_values_init.append(time.time() - start_time)\n approximate_results = model.most_similar([model.wv.syn0norm[0]], topn=100, indexer=nmslib_index)\n top_words = [result[0] for result in approximate_results]\n nmslib_y_values_accuracy.append(len(set(top_words).intersection(exact_results)))\n nmslib_y_values_query.append(avg_query_time(nmslib_index, queries=queries))\n create_evaluation_graph(nmslib_x_values,\n nmslib_y_values_init, \n nmslib_y_values_accuracy, \n nmslib_y_values_query, \n parameter)\n\n# Evaluate nmslib indexer, changing the parameter M\nevaluate_nmslib_performance(\"M\", False, 50, 401, 50)\n\n# Evaluate nmslib indexer, changing the parameter efConstruction\nevaluate_nmslib_performance(\"efConstruction\", False, 50, 1001, 100)\n\n# Evaluate nmslib indexer, changing the parameter efSearch\nevaluate_nmslib_performance(\"efSearch\", True, 50, 401, 100)\n\n# Evaluate annoy indexer, changing the parameter num_tree\nannoy_x_values = []\nannoy_y_values_init = []\nannoy_y_values_accuracy = []\nannoy_y_values_query = []\n\nfor x in range(100, 401, 50):\n annoy_x_values.append(x)\n start_time = time.time()\n annoy_index = AnnoyIndexer(model, x)\n annoy_y_values_init.append(time.time() - start_time)\n approximate_results = model.most_similar([model.wv.syn0norm[0]], topn=100, indexer=annoy_index)\n top_words = [result[0] for result in approximate_results]\n annoy_y_values_accuracy.append(len(set(top_words).intersection(exact_results)))\n annoy_y_values_query.append(avg_query_time(annoy_index, queries=queries))\ncreate_evaluation_graph(annoy_x_values,\n annoy_y_values_init, \n annoy_y_values_accuracy, \n annoy_y_values_query, \n 
\"num_tree\")\n\n# nmslib indexer changing the parameter M, efConstruction, efSearch\nnmslib_y_values_init = []\nnmslib_y_values_accuracy = []\nnmslib_y_values_query = []\n\nfor M in [100, 200]:\n for efConstruction in [100, 200]:\n for efSearch in [100, 200]:\n start_time = time.time()\n nmslib_index = NmslibIndexer(model, \n {'M': M, 'indexThreadQty': 10, 'efConstruction': efConstruction, 'post': 0},\n {'efSearch': efSearch})\n nmslib_y_values_init.append(time.time() - start_time)\n approximate_results = model.most_similar([model.wv.syn0norm[0]], topn=100, indexer=nmslib_index)\n top_words = [result[0] for result in approximate_results]\n nmslib_y_values_accuracy.append(len(set(top_words).intersection(exact_results)))\n nmslib_y_values_query.append(avg_query_time(nmslib_index, queries=queries))\n\n\n# Make a comparison between annoy and nmslib indexer\nplt.figure(1, figsize=(12, 6))\nplt.subplot(121)\nplt.scatter(nmslib_y_values_init, nmslib_y_values_accuracy, label=\"nmslib\", color='r', marker='o')\nplt.scatter(annoy_y_values_init, annoy_y_values_accuracy, label=\"annoy\", color='b', marker='x')\nplt.legend()\nplt.title(\"Initialization time vs accuracy. Upper left is better.\")\nplt.ylabel(\"% accuracy\")\nplt.xlabel(\"Initialization time (s)\")\nplt.subplot(122)\nplt.scatter(nmslib_y_values_query, nmslib_y_values_accuracy, label=\"nmslib\", color='r', marker='o')\nplt.scatter(annoy_y_values_query, annoy_y_values_accuracy, label=\"annoy\", color='b', marker='x')\nplt.legend()\nplt.title(\"Query time vs accuracy. Upper left is better.\")\nplt.ylabel(\"% accuracy\")\nplt.xlabel(\"Query time (s)\")\nplt.xlim(min(nmslib_y_values_query+annoy_y_values_query), max(nmslib_y_values_query+annoy_y_values_query))\nplt.tight_layout()\nplt.show()",
"6. Work with Google word2vec files\nOur model can be exported to a word2vec C format. There is a binary and a plain text word2vec format. Both can be read with a variety of other software, or imported back into gensim as a KeyedVectors object.",
"# To export our model as text\nmodel.wv.save_word2vec_format('/tmp/vectors.txt', binary=False)\n\nfrom smart_open import open\n# View the first 3 lines of the exported file\n\n# The first line has the total number of entries and the vector dimension count. \n# The next lines have a key (a string) followed by its vector.\nwith open('/tmp/vectors.txt') as myfile:\n for i in range(3):\n print(myfile.readline().strip())\n\n# To import a word2vec text model\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.txt', binary=False)\n\n# To export our model as binary\nmodel.wv.save_word2vec_format('/tmp/vectors.bin', binary=True)\n\n# To import a word2vec binary model\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)\n\n# To create and save Nmslib Index from a loaded `KeyedVectors` object \nnmslib_index = NmslibIndexer(wv, \n {'M': 100, 'indexThreadQty': 1, 'efConstruction': 100}, {'efSearch': 100})\nnmslib_index.save('/tmp/mymodel.index')\n\n# Load and test the saved word vectors and saved nmslib index\nwv = KeyedVectors.load_word2vec_format('/tmp/vectors.bin', binary=True)\nnmslib_index = NmslibIndexer.load('/tmp/mymodel.index')\nnmslib_index.model = wv\n\nvector = wv[\"cat\"]\napproximate_neighbors = wv.most_similar([vector], topn=11, indexer=nmslib_index)\n# Neatly print the approximate_neighbors and their corresponding cosine similarity values\nprint(\"Approximate Neighbors\")\nfor neighbor in approximate_neighbors:\n print(neighbor)\n\nnormal_neighbors = wv.most_similar([vector], topn=11)\nprint(\"\\nNormal (not Nmslib-indexed) Neighbors\")\nfor neighbor in normal_neighbors:\n print(neighbor)",
"Recap\nIn this notebook we used the Nmslib module to build an indexed approximation of our word embeddings. To do so, we did the following steps:\n1. Download Text8 Corpus\n2. Build Word2Vec Model\n3. Construct NmslibIndex with model & make a similarity query\n4. Verify & Evaluate performance\n5. Evaluate relationship of parameters to initialization/query time and accuracy, compared with annoy\n6. Work with Google's word2vec C formats"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dipanjanS/BerkeleyX-CS100.1x-Big-Data-with-Apache-Spark | Week 2 - Introduction to Apache Spark/lab1_word_count_student.ipynb | mit | [
"+ \nWord Count Lab: Building a word count application\nThis lab will build on the techniques covered in the Spark tutorial to develop a simple word count application. The volume of unstructured text in existence is growing dramatically, and Spark is an excellent tool for analyzing this type of data. In this lab, we will write code that calculates the most common words in the Complete Works of William Shakespeare retrieved from Project Gutenberg. This could also be scaled to find the most common words on the Internet.\n During this lab we will cover: \nPart 1: Creating a base RDD and pair RDDs\nPart 2: Counting with pair RDDs\nPart 3: Finding unique words and a mean value\nPart 4: Apply word count to a file\nNote that, for reference, you can look up the details of the relevant methods in Spark's Python API\n Part 1: Creating a base RDD and pair RDDs \nIn this part of the lab, we will explore creating a base RDD with parallelize and using pair RDDs to count words.\n (1a) Create a base RDD \nWe'll start by generating a base RDD by using a Python list and the sc.parallelize method. Then we'll print out the type of the base RDD.",
"wordsList = ['cat', 'elephant', 'rat', 'rat', 'cat']\nwordsRDD = sc.parallelize(wordsList, 4)\n# Print out the type of wordsRDD\nprint type(wordsRDD)",
"(1b) Pluralize and test \nLet's use a map() transformation to add the letter 's' to each string in the base RDD we just created. We'll define a Python function that returns the word with an 's' at the end of the word. Please replace <FILL IN> with your solution. If you have trouble, the next cell has the solution. After you have defined makePlural you can run the third cell which contains a test. If you implementation is correct it will print 1 test passed.\nThis is the general form that exercises will take, except that no example solution will be provided. Exercises will include an explanation of what is expected, followed by code cells where one cell will have one or more <FILL IN> sections. The cell that needs to be modified will have # TODO: Replace <FILL IN> with appropriate code on its first line. Once the <FILL IN> sections are updated and the code is run, the test cell can then be run to verify the correctness of your solution. The last code cell before the next markdown section will contain the tests.",
"# TODO: Replace <FILL IN> with appropriate code\ndef makePlural(word):\n \"\"\"Adds an 's' to `word`.\n\n Note:\n This is a simple function that only adds an 's'. No attempt is made to follow proper\n pluralization rules.\n\n Args:\n word (str): A string.\n\n Returns:\n str: A string with 's' added to it.\n \"\"\"\n return word + 's'\n\nprint makePlural('cat')\n\n# One way of completing the function\ndef makePlural(word):\n return word + 's'\n\nprint makePlural('cat')\n\n# Load in the testing code and check to see if your answer is correct\n# If incorrect it will report back '1 test failed' for each failed test\n# Make sure to rerun any cell you change before trying the test again\nfrom test_helper import Test\n# TEST Pluralize and test (1b)\nTest.assertEquals(makePlural('rat'), 'rats', 'incorrect result: makePlural does not add an s')",
"(1c) Apply makePlural to the base RDD \nNow pass each item in the base RDD into a map() transformation that applies the makePlural() function to each element. And then call the collect() action to see the transformed RDD.",
"# TODO: Replace <FILL IN> with appropriate code\npluralRDD = wordsRDD.map(makePlural)\nprint pluralRDD.collect()\n\n# TEST Apply makePlural to the base RDD(1c)\nTest.assertEquals(pluralRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],\n 'incorrect values for pluralRDD')",
"(1d) Pass a lambda function to map \nLet's create the same RDD using a lambda function.",
"# TODO: Replace <FILL IN> with appropriate code\npluralLambdaRDD = wordsRDD.map(lambda word: word + 's')\nprint pluralLambdaRDD.collect()\n\n# TEST Pass a lambda function to map (1d)\nTest.assertEquals(pluralLambdaRDD.collect(), ['cats', 'elephants', 'rats', 'rats', 'cats'],\n 'incorrect values for pluralLambdaRDD (1d)')",
"(1e) Length of each word \nNow use map() and a lambda function to return the number of characters in each word. We'll collect this result directly into a variable.",
"# TODO: Replace <FILL IN> with appropriate code\npluralLengths = (pluralRDD\n .map(lambda word: len(word))\n .collect())\nprint pluralLengths\n\n# TEST Length of each word (1e)\nTest.assertEquals(pluralLengths, [4, 9, 4, 4, 4],\n 'incorrect values for pluralLengths')",
"(1f) Pair RDDs \nThe next step in writing our word counting program is to create a new type of RDD, called a pair RDD. A pair RDD is an RDD where each element is a pair tuple (k, v) where k is the key and v is the value. In this example, we will create a pair consisting of ('<word>', 1) for each word element in the RDD.\nWe can create the pair RDD using the map() transformation with a lambda() function to create a new RDD.",
"# TODO: Replace <FILL IN> with appropriate code\nwordPairs = wordsRDD.map(lambda word: (word, 1))\nprint wordPairs.collect()\n\n# TEST Pair RDDs (1f)\nTest.assertEquals(wordPairs.collect(),\n [('cat', 1), ('elephant', 1), ('rat', 1), ('rat', 1), ('cat', 1)],\n 'incorrect value for wordPairs')",
"Part 2: Counting with pair RDDs \nNow, let's count the number of times a particular word appears in the RDD. There are multiple ways to perform the counting, but some are much less efficient than others.\nA naive approach would be to collect() all of the elements and count them in the driver program. While this approach could work for small datasets, we want an approach that will work for any size dataset including terabyte- or petabyte-sized datasets. In addition, performing all of the work in the driver program is slower than performing it in parallel in the workers. For these reasons, we will use data parallel operations.\n (2a) groupByKey() approach \nAn approach you might first consider (we'll see shortly that there are better ways) is based on using the groupByKey() transformation. As the name implies, the groupByKey() transformation groups all the elements of the RDD with the same key into a single list in one of the partitions. There are two problems with using groupByKey():\n\n\nThe operation requires a lot of data movement to move all the values into the appropriate partitions.\n\n\nThe lists can be very large. Consider a word count of English Wikipedia: the lists for common words (e.g., the, a, etc.) would be huge and could exhaust the available memory in a worker.\n\n\nUse groupByKey() to generate a pair RDD of type ('word', iterator).",
"# TODO: Replace <FILL IN> with appropriate code\n# Note that groupByKey requires no parameters\nwordsGrouped = wordPairs.groupByKey()\nfor key, value in wordsGrouped.collect():\n print '{0}: {1}'.format(key, list(value))\n\n# TEST groupByKey() approach (2a)\nTest.assertEquals(sorted(wordsGrouped.mapValues(lambda x: list(x)).collect()),\n [('cat', [1, 1]), ('elephant', [1]), ('rat', [1, 1])],\n 'incorrect value for wordsGrouped')",
"(2b) Use groupByKey() to obtain the counts \nUsing the groupByKey() transformation creates an RDD containing 3 elements, each of which is a pair of a word and a Python iterator.\nNow sum the iterator using a map() transformation. The result should be a pair RDD consisting of (word, count) pairs.",
"# TODO: Replace <FILL IN> with appropriate code\nwordCountsGrouped = wordsGrouped.map(lambda (k,v): (k, sum(v)))\nprint wordCountsGrouped.collect()\n\n# TEST Use groupByKey() to obtain the counts (2b)\nTest.assertEquals(sorted(wordCountsGrouped.collect()),\n [('cat', 2), ('elephant', 1), ('rat', 2)],\n 'incorrect value for wordCountsGrouped')",
"(2c) Counting using reduceByKey \nA better approach is to start from the pair RDD and then use the reduceByKey() transformation to create a new pair RDD. The reduceByKey() transformation gathers together pairs that have the same key and applies the function provided to two values at a time, iteratively reducing all of the values to a single value. reduceByKey() operates by applying the function first within each partition on a per-key basis and then across the partitions, allowing it to scale efficiently to large datasets.",
"# TODO: Replace <FILL IN> with appropriate code\n# Note that reduceByKey takes in a function that accepts two values and returns a single value\n\nwordCounts = wordPairs.reduceByKey(lambda a,b: a+b)\nprint wordCounts.collect()\n\n# TEST Counting using reduceByKey (2c)\nTest.assertEquals(sorted(wordCounts.collect()), [('cat', 2), ('elephant', 1), ('rat', 2)],\n 'incorrect value for wordCounts')",
"(2d) All together \nThe expert version of the code performs the map() to pair RDD, reduceByKey() transformation, and collect in one statement.",
"# TODO: Replace <FILL IN> with appropriate code\nwordCountsCollected = (wordsRDD\n .map(lambda word: (word, 1))\n .reduceByKey(lambda a,b: a+b)\n .collect())\nprint wordCountsCollected\n\n# TEST All together (2d)\nTest.assertEquals(sorted(wordCountsCollected), [('cat', 2), ('elephant', 1), ('rat', 2)],\n 'incorrect value for wordCountsCollected')",
"Part 3: Finding unique words and a mean value \n (3a) Unique words \nCalculate the number of unique words in wordsRDD. You can use other RDDs that you have already created to make this easier.",
"# TODO: Replace <FILL IN> with appropriate code\nuniqueWords = wordsRDD.map(lambda word: (word, 1)).distinct().count()\nprint uniqueWords\n\n# TEST Unique words (3a)\nTest.assertEquals(uniqueWords, 3, 'incorrect count of uniqueWords')",
"(3b) Mean using reduce \nFind the mean number of words per unique word in wordCounts.\nUse a reduce() action to sum the counts in wordCounts and then divide by the number of unique words. First map() the pair RDD wordCounts, which consists of (key, value) pairs, to an RDD of values.",
"# TODO: Replace <FILL IN> with appropriate code\nfrom operator import add\n\ntotalCount = (wordCounts\n .map(lambda (a,b): b)\n .reduce(add))\naverage = totalCount / float(wordCounts.distinct().count())\nprint totalCount\nprint round(average, 2)\n\n# TEST Mean using reduce (3b)\nTest.assertEquals(round(average, 2), 1.67, 'incorrect value of average')",
"Part 4: Apply word count to a file \nIn this section we will finish developing our word count application. We'll have to build the wordCount function, deal with real world problems like capitalization and punctuation, load in our data source, and compute the word count on the new data.\n (4a) wordCount function \nFirst, define a function for word counting. You should reuse the techniques that have been covered in earlier parts of this lab. This function should take in an RDD that is a list of words like wordsRDD and return a pair RDD that has all of the words and their associated counts.",
"# TODO: Replace <FILL IN> with appropriate code\ndef wordCount(wordListRDD):\n \"\"\"Creates a pair RDD with word counts from an RDD of words.\n\n Args:\n wordListRDD (RDD of str): An RDD consisting of words.\n\n Returns:\n RDD of (str, int): An RDD consisting of (word, count) tuples.\n \"\"\"\n return (wordListRDD\n .map(lambda a : (a,1))\n .reduceByKey(lambda a,b: a+b))\nprint wordCount(wordsRDD).collect()\n\n# TEST wordCount function (4a)\nTest.assertEquals(sorted(wordCount(wordsRDD).collect()),\n [('cat', 2), ('elephant', 1), ('rat', 2)],\n 'incorrect definition for wordCount function')",
"(4b) Capitalization and punctuation \nReal world files are more complicated than the data we have been using in this lab. Some of the issues we have to address are:\n\n\nWords should be counted independent of their capitialization (e.g., Spark and spark should be counted as the same word).\n\n\nAll punctuation should be removed.\n\n\nAny leading or trailing spaces on a line should be removed.\n\n\nDefine the function removePunctuation that converts all text to lower case, removes any punctuation, and removes leading and trailing spaces. Use the Python re module to remove any text that is not a letter, number, or space. Reading help(re.sub) might be useful.",
"# TODO: Replace <FILL IN> with appropriate code\nimport re\ndef removePunctuation(text):\n \"\"\"Removes punctuation, changes to lower case, and strips leading and trailing spaces.\n\n Note:\n Only spaces, letters, and numbers should be retained. Other characters should should be\n eliminated (e.g. it's becomes its). Leading and trailing spaces should be removed after\n punctuation is removed.\n\n Args:\n text (str): A string.\n\n Returns:\n str: The cleaned up string.\n \"\"\"\n return re.sub(\"[^a-zA-Z0-9 ]\", \"\", text.strip(\" \").lower())\nprint removePunctuation('Hi, you!')\nprint removePunctuation(' No under_score!')\n\n# TEST Capitalization and punctuation (4b)\nTest.assertEquals(removePunctuation(\" The Elephant's 4 cats. \"),\n 'the elephants 4 cats',\n 'incorrect definition for removePunctuation function')",
"(4c) Load a text file \nFor the next part of this lab, we will use the Complete Works of William Shakespeare from Project Gutenberg. To convert a text file into an RDD, we use the SparkContext.textFile() method. We also apply the recently defined removePunctuation() function using a map() transformation to strip out the punctuation and change all text to lowercase. Since the file is large we use take(15), so that we only print 15 lines.",
"# Just run this code\nimport os.path\nbaseDir = os.path.join('data')\ninputPath = os.path.join('cs100', 'lab1', 'shakespeare.txt')\nfileName = os.path.join(baseDir, inputPath)\n\nshakespeareRDD = (sc\n .textFile(fileName, 8)\n .map(removePunctuation))\nprint '\\n'.join(shakespeareRDD\n .zipWithIndex() # to (line, lineNum)\n .map(lambda (l, num): '{0}: {1}'.format(num, l)) # to 'lineNum: line'\n .take(15))",
"(4d) Words from lines \nBefore we can use the wordcount() function, we have to address two issues with the format of the RDD:\n\n\nThe first issue is that that we need to split each line by its spaces.\n\n\nThe second issue is we need to filter out empty lines.\n\n\nApply a transformation that will split each element of the RDD by its spaces. For each element of the RDD, you should apply Python's string split() function. You might think that a map() transformation is the way to do this, but think about what the result of the split() function will be.",
"# TODO: Replace <FILL IN> with appropriate code\nshakespeareWordsRDD = shakespeareRDD.flatMap(lambda a: a.split(\" \"))\nshakespeareWordCount = shakespeareWordsRDD.count()\nprint shakespeareWordsRDD.top(5)\nprint shakespeareWordCount\n\n# TEST Words from lines (4d)\n# This test allows for leading spaces to be removed either before or after\n# punctuation is removed.\nTest.assertTrue(shakespeareWordCount == 927631 or shakespeareWordCount == 928908,\n 'incorrect value for shakespeareWordCount')\nTest.assertEquals(shakespeareWordsRDD.top(5),\n [u'zwaggerd', u'zounds', u'zounds', u'zounds', u'zounds'],\n 'incorrect value for shakespeareWordsRDD')",
"(4e) Remove empty elements \nThe next step is to filter out the empty elements. Remove all entries where the word is ''.",
"# TODO: Replace <FILL IN> with appropriate code\nshakeWordsRDD = shakespeareWordsRDD.filter(lambda word: len(word) > 0)\nshakeWordCount = shakeWordsRDD.count()\nprint shakeWordCount\n\n# TEST Remove empty elements (4e)\nTest.assertEquals(shakeWordCount, 882996, 'incorrect value for shakeWordCount')",
"(4f) Count the words \nWe now have an RDD that is only words. Next, let's apply the wordCount() function to produce a list of word counts. We can view the top 15 words by using the takeOrdered() action; however, since the elements of the RDD are pairs, we need a custom sort function that sorts using the value part of the pair.\nYou'll notice that many of the words are common English words. These are called stopwords. In a later lab, we will see how to eliminate them from the results.\nUse the wordCount() function and takeOrdered() to obtain the fifteen most common words and their counts.",
"# TODO: Replace <FILL IN> with appropriate code\ntop15WordsAndCounts = wordCount(shakeWordsRDD).takeOrdered(15, lambda (a,b): -b)\nprint '\\n'.join(map(lambda (w, c): '{0}: {1}'.format(w, c), top15WordsAndCounts))\n\n# TEST Count the words (4f)\nTest.assertEquals(top15WordsAndCounts,\n [(u'the', 27361), (u'and', 26028), (u'i', 20681), (u'to', 19150), (u'of', 17463),\n (u'a', 14593), (u'you', 13615), (u'my', 12481), (u'in', 10956), (u'that', 10890),\n (u'is', 9134), (u'not', 8497), (u'with', 7771), (u'me', 7769), (u'it', 7678)],\n 'incorrect value for top15WordsAndCounts')"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
guillaume-chevalier/LSTM-Human-Activity-Recognition | LSTM.ipynb | mit | [
"<a title=\"Activity Recognition\" href=\"https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition\" > LSTMs for Human Activity Recognition</a>\nHuman Activity Recognition (HAR) using smartphones dataset and an LSTM RNN. Classifying the type of movement amongst six categories:\n- WALKING,\n- WALKING_UPSTAIRS,\n- WALKING_DOWNSTAIRS,\n- SITTING,\n- STANDING,\n- LAYING.\nCompared to a classical approach, using a Recurrent Neural Networks (RNN) with Long Short-Term Memory cells (LSTMs) require no or almost no feature engineering. Data can be fed directly into the neural network who acts like a black box, modeling the problem correctly. Other research on the activity recognition dataset can use a big amount of feature engineering, which is rather a signal processing approach combined with classical data science techniques. The approach here is rather very simple in terms of how much was the data preprocessed. \nLet's use Google's neat Deep Learning library, TensorFlow, demonstrating the usage of an LSTM, a type of Artificial Neural Network that can process sequential data / time series. \nVideo dataset overview\nFollow this link to see a video of the 6 activities recorded in the experiment with one of the participants:\n<p align=\"center\">\n <a href=\"http://www.youtube.com/watch?feature=player_embedded&v=XOEN9W05_4A\n\" target=\"_blank\"><img src=\"http://img.youtube.com/vi/XOEN9W05_4A/0.jpg\" \nalt=\"Video of the experiment\" width=\"400\" height=\"300\" border=\"10\" /></a>\n <a href=\"https://youtu.be/XOEN9W05_4A\"><center>[Watch video]</center></a>\n</p>\n\nDetails about the input data\nI will be using an LSTM on the data to learn (as a cellphone attached on the waist) to recognise the type of activity that the user is doing. The dataset's description goes like this:\n\nThe sensor signals (accelerometer and gyroscope) were pre-processed by applying noise filters and then sampled in fixed-width sliding windows of 2.56 sec and 50% overlap (128 readings/window). The sensor acceleration signal, which has gravitational and body motion components, was separated using a Butterworth low-pass filter into body acceleration and gravity. The gravitational force is assumed to have only low frequency components, therefore a filter with 0.3 Hz cutoff frequency was used. \n\nThat said, I will use the almost raw data: only the gravity effect has been filtered out of the accelerometer as a preprocessing step for another 3D feature as an input to help learning. If you'd ever want to extract the gravity by yourself, you could fork my code on using a Butterworth Low-Pass Filter (LPF) in Python and edit it to have the right cutoff frequency of 0.3 Hz which is a good frequency for activity recognition from body sensors.\nWhat is an RNN?\nAs explained in this article, an RNN takes many input vectors to process them and output other vectors. It can be roughly pictured like in the image below, imagining each rectangle has a vectorial depth and other special hidden quirks in the image below. In our case, the \"many to one\" architecture is used: we accept time series of feature vectors (one vector per time step) to convert them to a probability vector at the output for classification. Note that a \"one to one\" architecture would be a standard feedforward neural network. 
\n\n<a href=\"https://www.dl-rnn-course.neuraxio.com/start?utm_source=github_lstm\" ><img src=\"https://raw.githubusercontent.com/Neuraxio/Machine-Learning-Figures/master/rnn-architectures.png\" /></a>\nLearn more on RNNs\n\nWhat is an LSTM?\nAn LSTM is an improved RNN. It is more complex, but easier to train, avoiding what is called the vanishing gradient problem. I recommend this course for you to learn more on LSTMs.\n\nLearn more on LSTMs\n\nResults\nScroll on! Nice visuals awaits.",
"# All Includes\n\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nimport tensorflow as tf # Version 1.0.0 (some previous versions are used in past commits)\nfrom sklearn import metrics\n\nimport os\n\n# Useful Constants\n\n# Those are separate normalised input features for the neural network\nINPUT_SIGNAL_TYPES = [\n \"body_acc_x_\",\n \"body_acc_y_\",\n \"body_acc_z_\",\n \"body_gyro_x_\",\n \"body_gyro_y_\",\n \"body_gyro_z_\",\n \"total_acc_x_\",\n \"total_acc_y_\",\n \"total_acc_z_\"\n]\n\n# Output classes to learn how to classify\nLABELS = [\n \"WALKING\", \n \"WALKING_UPSTAIRS\", \n \"WALKING_DOWNSTAIRS\", \n \"SITTING\", \n \"STANDING\", \n \"LAYING\"\n] \n",
"Let's start by downloading the data:",
"# Note: Linux bash commands start with a \"!\" inside those \"ipython notebook\" cells\n\nDATA_PATH = \"data/\"\n\n!pwd && ls\nos.chdir(DATA_PATH)\n!pwd && ls\n\n!python download_dataset.py\n\n!pwd && ls\nos.chdir(\"..\")\n!pwd && ls\n\nDATASET_PATH = DATA_PATH + \"UCI HAR Dataset/\"\nprint(\"\\n\" + \"Dataset is now located at: \" + DATASET_PATH)\n",
"Preparing dataset:",
"TRAIN = \"train/\"\nTEST = \"test/\"\n\n\n# Load \"X\" (the neural network's training and testing inputs)\n\ndef load_X(X_signals_paths):\n X_signals = []\n \n for signal_type_path in X_signals_paths:\n file = open(signal_type_path, 'r')\n # Read dataset from disk, dealing with text files' syntax\n X_signals.append(\n [np.array(serie, dtype=np.float32) for serie in [\n row.replace(' ', ' ').strip().split(' ') for row in file\n ]]\n )\n file.close()\n \n return np.transpose(np.array(X_signals), (1, 2, 0))\n\nX_train_signals_paths = [\n DATASET_PATH + TRAIN + \"Inertial Signals/\" + signal + \"train.txt\" for signal in INPUT_SIGNAL_TYPES\n]\nX_test_signals_paths = [\n DATASET_PATH + TEST + \"Inertial Signals/\" + signal + \"test.txt\" for signal in INPUT_SIGNAL_TYPES\n]\n\nX_train = load_X(X_train_signals_paths)\nX_test = load_X(X_test_signals_paths)\n\n\n# Load \"y\" (the neural network's training and testing outputs)\n\ndef load_y(y_path):\n file = open(y_path, 'r')\n # Read dataset from disk, dealing with text file's syntax\n y_ = np.array(\n [elem for elem in [\n row.replace(' ', ' ').strip().split(' ') for row in file\n ]], \n dtype=np.int32\n )\n file.close()\n \n # Substract 1 to each output class for friendly 0-based indexing \n return y_ - 1\n\ny_train_path = DATASET_PATH + TRAIN + \"y_train.txt\"\ny_test_path = DATASET_PATH + TEST + \"y_test.txt\"\n\ny_train = load_y(y_train_path)\ny_test = load_y(y_test_path)\n",
"Additionnal Parameters:\nHere are some core parameter definitions for the training. \nFor example, the whole neural network's structure could be summarised by enumerating those parameters and the fact that two LSTM are used one on top of another (stacked) output-to-input as hidden layers through time steps.",
"# Input Data \n\ntraining_data_count = len(X_train) # 7352 training series (with 50% overlap between each serie)\ntest_data_count = len(X_test) # 2947 testing series\nn_steps = len(X_train[0]) # 128 timesteps per series\nn_input = len(X_train[0][0]) # 9 input parameters per timestep\n\n\n# LSTM Neural Network's internal structure\n\nn_hidden = 32 # Hidden layer num of features\nn_classes = 6 # Total classes (should go up, or should go down)\n\n\n# Training \n\nlearning_rate = 0.0025\nlambda_loss_amount = 0.0015\ntraining_iters = training_data_count * 300 # Loop 300 times on the dataset\nbatch_size = 1500\ndisplay_iter = 30000 # To show test set accuracy during training\n\n\n# Some debugging info\n\nprint(\"Some useful info to get an insight on dataset's shape and normalisation:\")\nprint(\"(X shape, y shape, every X's mean, every X's standard deviation)\")\nprint(X_test.shape, y_test.shape, np.mean(X_test), np.std(X_test))\nprint(\"The dataset is therefore properly normalised, as expected, but not yet one-hot encoded.\")\n",
"Utility functions for training:",
"def LSTM_RNN(_X, _weights, _biases):\n # Function returns a tensorflow LSTM (RNN) artificial neural network from given parameters. \n # Moreover, two LSTM cells are stacked which adds deepness to the neural network. \n # Note, some code of this notebook is inspired from an slightly different \n # RNN architecture used on another dataset, some of the credits goes to \n # \"aymericdamien\" under the MIT license.\n\n # (NOTE: This step could be greatly optimised by shaping the dataset once\n # input shape: (batch_size, n_steps, n_input)\n _X = tf.transpose(_X, [1, 0, 2]) # permute n_steps and batch_size\n # Reshape to prepare input to hidden activation\n _X = tf.reshape(_X, [-1, n_input]) \n # new shape: (n_steps*batch_size, n_input)\n \n # ReLU activation, thanks to Yu Zhao for adding this improvement here:\n _X = tf.nn.relu(tf.matmul(_X, _weights['hidden']) + _biases['hidden'])\n # Split data because rnn cell needs a list of inputs for the RNN inner loop\n _X = tf.split(_X, n_steps, 0) \n # new shape: n_steps * (batch_size, n_hidden)\n\n # Define two stacked LSTM cells (two recurrent layers deep) with tensorflow\n lstm_cell_1 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)\n lstm_cell_2 = tf.contrib.rnn.BasicLSTMCell(n_hidden, forget_bias=1.0, state_is_tuple=True)\n lstm_cells = tf.contrib.rnn.MultiRNNCell([lstm_cell_1, lstm_cell_2], state_is_tuple=True)\n # Get LSTM cell output\n outputs, states = tf.contrib.rnn.static_rnn(lstm_cells, _X, dtype=tf.float32)\n\n # Get last time step's output feature for a \"many-to-one\" style classifier, \n # as in the image describing RNNs at the top of this page\n lstm_last_output = outputs[-1]\n \n # Linear activation\n return tf.matmul(lstm_last_output, _weights['out']) + _biases['out']\n\n\ndef extract_batch_size(_train, step, batch_size):\n # Function to fetch a \"batch_size\" amount of data from \"(X|y)_train\" data. \n \n shape = list(_train.shape)\n shape[0] = batch_size\n batch_s = np.empty(shape)\n\n for i in range(batch_size):\n # Loop index\n index = ((step-1)*batch_size + i) % len(_train)\n batch_s[i] = _train[index] \n\n return batch_s\n\n\ndef one_hot(y_, n_classes=n_classes):\n # Function to encode neural one-hot output labels from number indexes \n # e.g.: \n # one_hot(y_=[[5], [0], [3]], n_classes=6):\n # return [[0, 0, 0, 0, 0, 1], [1, 0, 0, 0, 0, 0], [0, 0, 0, 1, 0, 0]]\n \n y_ = y_.reshape(len(y_))\n return np.eye(n_classes)[np.array(y_, dtype=np.int32)] # Returns FLOATS\n",
"Let's get serious and build the neural network:",
"\n# Graph input/output\nx = tf.placeholder(tf.float32, [None, n_steps, n_input])\ny = tf.placeholder(tf.float32, [None, n_classes])\n\n# Graph weights\nweights = {\n 'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])), # Hidden layer weights\n 'out': tf.Variable(tf.random_normal([n_hidden, n_classes], mean=1.0))\n}\nbiases = {\n 'hidden': tf.Variable(tf.random_normal([n_hidden])),\n 'out': tf.Variable(tf.random_normal([n_classes]))\n}\n\npred = LSTM_RNN(x, weights, biases)\n\n# Loss, optimizer and evaluation\nl2 = lambda_loss_amount * sum(\n tf.nn.l2_loss(tf_var) for tf_var in tf.trainable_variables()\n) # L2 loss prevents this overkill neural network to overfit the data\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=pred)) + l2 # Softmax loss\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost) # Adam Optimizer\n\ncorrect_pred = tf.equal(tf.argmax(pred,1), tf.argmax(y,1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))\n",
"Hooray, now train the neural network:",
"# To keep track of training's performance\ntest_losses = []\ntest_accuracies = []\ntrain_losses = []\ntrain_accuracies = []\n\n# Launch the graph\nsess = tf.InteractiveSession(config=tf.ConfigProto(log_device_placement=True))\ninit = tf.global_variables_initializer()\nsess.run(init)\n\n# Perform Training steps with \"batch_size\" amount of example data at each loop\nstep = 1\nwhile step * batch_size <= training_iters:\n batch_xs = extract_batch_size(X_train, step, batch_size)\n batch_ys = one_hot(extract_batch_size(y_train, step, batch_size))\n\n # Fit training using batch data\n _, loss, acc = sess.run(\n [optimizer, cost, accuracy],\n feed_dict={\n x: batch_xs, \n y: batch_ys\n }\n )\n train_losses.append(loss)\n train_accuracies.append(acc)\n \n # Evaluate network only at some steps for faster training: \n if (step*batch_size % display_iter == 0) or (step == 1) or (step * batch_size > training_iters):\n \n # To not spam console, show training accuracy/loss in this \"if\"\n print(\"Training iter #\" + str(step*batch_size) + \\\n \": Batch Loss = \" + \"{:.6f}\".format(loss) + \\\n \", Accuracy = {}\".format(acc))\n \n # Evaluation on the test set (no learning made here - just evaluation for diagnosis)\n loss, acc = sess.run(\n [cost, accuracy], \n feed_dict={\n x: X_test,\n y: one_hot(y_test)\n }\n )\n test_losses.append(loss)\n test_accuracies.append(acc)\n print(\"PERFORMANCE ON TEST SET: \" + \\\n \"Batch Loss = {}\".format(loss) + \\\n \", Accuracy = {}\".format(acc))\n\n step += 1\n\nprint(\"Optimization Finished!\")\n\n# Accuracy for test data\n\none_hot_predictions, accuracy, final_loss = sess.run(\n [pred, accuracy, cost],\n feed_dict={\n x: X_test,\n y: one_hot(y_test)\n }\n)\n\ntest_losses.append(final_loss)\ntest_accuracies.append(accuracy)\n\nprint(\"FINAL RESULT: \" + \\\n \"Batch Loss = {}\".format(final_loss) + \\\n \", Accuracy = {}\".format(accuracy))\n",
"Training is good, but having visual insight is even better:\nOkay, let's plot this simply in the notebook for now.",
"# (Inline plots: )\n%matplotlib inline\n\nfont = {\n 'family' : 'Bitstream Vera Sans',\n 'weight' : 'bold',\n 'size' : 18\n}\nmatplotlib.rc('font', **font)\n\nwidth = 12\nheight = 12\nplt.figure(figsize=(width, height))\n\nindep_train_axis = np.array(range(batch_size, (len(train_losses)+1)*batch_size, batch_size))\nplt.plot(indep_train_axis, np.array(train_losses), \"b--\", label=\"Train losses\")\nplt.plot(indep_train_axis, np.array(train_accuracies), \"g--\", label=\"Train accuracies\")\n\nindep_test_axis = np.append(\n np.array(range(batch_size, len(test_losses)*display_iter, display_iter)[:-1]),\n [training_iters]\n)\nplt.plot(indep_test_axis, np.array(test_losses), \"b-\", label=\"Test losses\")\nplt.plot(indep_test_axis, np.array(test_accuracies), \"g-\", label=\"Test accuracies\")\n\nplt.title(\"Training session's progress over iterations\")\nplt.legend(loc='upper right', shadow=True)\nplt.ylabel('Training Progress (Loss or Accuracy values)')\nplt.xlabel('Training iteration')\n\nplt.show()",
"And finally, the multi-class confusion matrix and metrics!",
"# Results\n\npredictions = one_hot_predictions.argmax(1)\n\nprint(\"Testing Accuracy: {}%\".format(100*accuracy))\n\nprint(\"\")\nprint(\"Precision: {}%\".format(100*metrics.precision_score(y_test, predictions, average=\"weighted\")))\nprint(\"Recall: {}%\".format(100*metrics.recall_score(y_test, predictions, average=\"weighted\")))\nprint(\"f1_score: {}%\".format(100*metrics.f1_score(y_test, predictions, average=\"weighted\")))\n\nprint(\"\")\nprint(\"Confusion Matrix:\")\nconfusion_matrix = metrics.confusion_matrix(y_test, predictions)\nprint(confusion_matrix)\nnormalised_confusion_matrix = np.array(confusion_matrix, dtype=np.float32)/np.sum(confusion_matrix)*100\n\nprint(\"\")\nprint(\"Confusion matrix (normalised to % of total test data):\")\nprint(normalised_confusion_matrix)\nprint(\"Note: training and testing data is not equally distributed amongst classes, \")\nprint(\"so it is normal that more than a 6th of the data is correctly classifier in the last category.\")\n\n# Plot Results: \nwidth = 12\nheight = 12\nplt.figure(figsize=(width, height))\nplt.imshow(\n normalised_confusion_matrix, \n interpolation='nearest', \n cmap=plt.cm.rainbow\n)\nplt.title(\"Confusion matrix \\n(normalised to % of total test data)\")\nplt.colorbar()\ntick_marks = np.arange(n_classes)\nplt.xticks(tick_marks, LABELS, rotation=90)\nplt.yticks(tick_marks, LABELS)\nplt.tight_layout()\nplt.ylabel('True label')\nplt.xlabel('Predicted label')\nplt.show()\n\nsess.close()",
"Conclusion\nOutstandingly, the final accuracy is of 91%! And it can peak to values such as 93.25%, at some moments of luck during the training, depending on how the neural network's weights got initialized at the start of the training, randomly. \nThis means that the neural networks is almost always able to correctly identify the movement type! Remember, the phone is attached on the waist and each series to classify has just a 128 sample window of two internal sensors (a.k.a. 2.56 seconds at 50 FPS), so it amazes me how those predictions are extremely accurate given this small window of context and raw data. I've validated and re-validated that there is no important bug, and the community used and tried this code a lot. (Note: be sure to report something in the issue tab if you find bugs, otherwise Quora, StackOverflow, and other StackExchange sites are the places for asking questions.)\nI specially did not expect such good results for guessing between the labels \"SITTING\" and \"STANDING\". Those are seemingly almost the same thing from the point of view of a device placed at waist level according to how the dataset was originally gathered. Thought, it is still possible to see a little cluster on the matrix between those classes, which drifts away just a bit from the identity. This is great.\nIt is also possible to see that there was a slight difficulty in doing the difference between \"WALKING\", \"WALKING_UPSTAIRS\" and \"WALKING_DOWNSTAIRS\". Obviously, those activities are quite similar in terms of movements. \nI also tried my code without the gyroscope, using only the 3D accelerometer's 6 features (and not changing the training hyperparameters), and got an accuracy of 87%. In general, gyroscopes consumes more power than accelerometers, so it is preferable to turn them off. \nImprovements\nIn another open-source repository of mine, the accuracy is pushed up to nearly 94% using a special deep LSTM architecture which combines the concepts of bidirectional RNNs, residual connections, and stacked cells. This architecture is also tested on another similar activity dataset. It resembles the nice architecture used in \"Google’s Neural Machine Translation System: Bridging the Gap between Human and Machine Translation\", without an attention mechanism, and with just the encoder part - as a \"many to one\" architecture instead of a \"many to many\" to be adapted to the Human Activity Recognition (HAR) problem. I also worked more on the problem and came up with the LARNN, however it's complicated for just a little gain. Thus the current, original activity recognition project is simply better to use for its outstanding simplicity. \nIf you want to learn more about deep learning, I have also built a list of the learning ressources for deep learning which have revealed to be the most useful to me here. \nReferences\nThe dataset can be found on the UCI Machine Learning Repository: \n\nDavide Anguita, Alessandro Ghio, Luca Oneto, Xavier Parra and Jorge L. Reyes-Ortiz. A Public Domain Dataset for Human Activity Recognition Using Smartphones. 21th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2013. Bruges, Belgium 24-26 April 2013.\n\nCitation\nCopyright (c) 2016 Guillaume Chevalier. 
To cite my code, you can point to the URL of the GitHub repository, for example: \n\nGuillaume Chevalier, LSTMs for Human Activity Recognition, 2016, \nhttps://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition\n\nMy code is available for free and even for private usage for anyone under the MIT License, however I ask to cite for using the code. \nHere is the BibTeX citation code: \n@misc{chevalier2016lstms,\n title={LSTMs for human activity recognition},\n author={Chevalier, Guillaume},\n year={2016}\n}\nExtra links\nConnect with me\n\nLinkedIn\nTwitter\nGitHub\nQuora\nYouTube\nDev/Consulting\n\nLiked this project? Did it help you? Leave a star, fork and share the love!\nThis activity recognition project has been seen in:\n\nHacker News 1st page\nAwesome TensorFlow\nTensorFlow World\nAnd more.",
"# Let's convert this notebook to a README automatically for the GitHub project's title page:\n!jupyter nbconvert --to markdown LSTM.ipynb\n!mv LSTM.md README.md"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
telegraphic/allantools | examples/gradev-demo.ipynb | gpl-3.0 | [
"GRADEV: gap robust allan deviation\nNotebook setup & package imports",
"%matplotlib inline\n\nimport pylab as plt\nimport numpy as np",
"Gap robust allan deviation comparison\nCompute the GRADEV of a white phase noise. Compares two different\nscenarios. 1) The original data and 2) ADEV estimate with gap robust ADEV.",
"def example1():\n \"\"\"\n Compute the GRADEV of a white phase noise. Compares two different \n scenarios. 1) The original data and 2) ADEV estimate with gap robust ADEV.\n \"\"\"\n N = 1000\n f = 1\n y = np.random.randn(1,N)[0,:]\n x = np.linspace(1,len(y),len(y))\n x_ax, y_ax, err_l,err_h, ns = allan.gradev(y,f,x)\n plt.errorbar(x_ax, y_ax,yerr=[err_l,err_h],label='GRADEV, no gaps')\n \n \n y[np.floor(0.4*N):np.floor(0.6*N)] = np.NaN # Simulate missing data\n x_ax, y_ax, err_l,err_h, ns = allan.gradev(y,f,x)\n plt.errorbar(x_ax, y_ax,yerr=[err_l,err_h], label='GRADEV, with gaps')\n plt.xscale('log')\n plt.yscale('log')\n plt.grid()\n plt.legend()\n plt.xlabel('Tau / s')\n plt.ylabel('Overlapping Allan deviation')\n plt.show()\n\nexample1()",
"White phase noise\nCompute the GRADEV of a nonstationary white phase noise.",
"def example2():\n \"\"\"\n Compute the GRADEV of a nonstationary white phase noise.\n \"\"\"\n N=1000 # number of samples\n f = 1 # data samples per second\n s=1+5/N*np.arange(0,N)\n y=s*np.random.randn(1,N)[0,:]\n x = np.linspace(1,len(y),len(y))\n x_ax, y_ax, err_l, err_h, ns = allan.gradev(y,f,x)\n plt.loglog(x_ax, y_ax,'b.',label=\"No gaps\")\n y[int(0.4*N):int(0.6*N,)] = np.NaN # Simulate missing data\n x_ax, y_ax, err_l, err, ns = allan.gradev(y,f,x)\n plt.loglog(x_ax, y_ax,'g.',label=\"With gaps\")\n plt.grid()\n plt.legend()\n plt.xlabel('Tau / s')\n plt.ylabel('Overlapping Allan deviation')\n plt.show()\n\nexample2()"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ethen8181/machine-learning | model_selection/partial_dependence/partial_dependence.ipynb | mit | [
"<h1>Table of Contents<span class=\"tocSkip\"></span></h1>\n<div class=\"toc\"><ul class=\"toc-item\"><li><span><a href=\"#Partial-Dependence-Plot\" data-toc-modified-id=\"Partial-Dependence-Plot-1\"><span class=\"toc-item-num\">1 </span>Partial Dependence Plot</a></span><ul class=\"toc-item\"><li><span><a href=\"#Individual-Conditional-Expectation-(ICE)-Plot\" data-toc-modified-id=\"Individual-Conditional-Expectation-(ICE)-Plot-1.1\"><span class=\"toc-item-num\">1.1 </span>Individual Conditional Expectation (ICE) Plot</a></span></li><li><span><a href=\"#Implementation\" data-toc-modified-id=\"Implementation-1.2\"><span class=\"toc-item-num\">1.2 </span>Implementation</a></span></li></ul></li><li><span><a href=\"#Reference\" data-toc-modified-id=\"Reference-2\"><span class=\"toc-item-num\">2 </span>Reference</a></span></li></ul></div>",
"# code for loading the format for the notebook\nimport os\n\n# path : store the current path to convert back to it later\npath = os.getcwd()\nos.chdir(os.path.join('..', '..', 'notebook_format'))\n\nfrom formats import load_style\nload_style(css_style = 'custom2.css', plot_style = False)\n\nos.chdir(path)\n\n# 1. magic for inline plot\n# 2. magic to print version\n# 3. magic so that the notebook will reload external python modules\n# 4. magic to enable retina (high resolution) plots\n# https://gist.github.com/minrk/3301035\n%matplotlib inline\n%load_ext watermark\n%load_ext autoreload\n%autoreload 2\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom pathlib import Path\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.model_selection import train_test_split\n\n%watermark -a 'Ethen' -d -t -v -p numpy,pandas,matplotlib,sklearn",
"Partial Dependence Plot\nDuring the talk, Youtube: PyData - Random Forests Best Practices for the Business World, one of the best practices that the speaker mentioned when using tree-based models is to check for directional relationships. When using non-linear machine learning algorithms, such as popular tree-based models random forest and gradient boosted trees, it can be hard to understand the relations between predictors and model outcome as they do not give us handy coefficients like linear-based models. For example, in terms of random forest, all we get is the feature importance. Although based on that information, we can tell which feature is significantly influencing the outcome based on the importance calculation, it does not inform us in which direction is the predictor influencing outcome. In this notebook, we'll be exploring Partial dependence plot (PDP), a model agnostic technique that gives us an approximate directional influence for a given feature that was used in the model. Note much of the explanation is \"borrowed\" from the blog post at the following link, Blog: Introducing PDPbox, this documentation aims to improve upon it by giving a cleaner implementation.\nPartial dependence plot (PDP) aims to visualize the marginal effect of a given predictor towards the model outcome by plotting out the average model outcome in terms of different values of the predictor. Let's first gain some intuition of how it works with a made up example. Assume we have a data set that only contains three data points and three features (A, B, C) as shown below.\n<img src=\"img/pd1.png\" width=\"30%\" height=\"30%\">\nIf we wish to see how feature A is influencing the prediction Y, what PDP does is to generate a new data set as follow. (here we assume that feature A only has three unique values: A1, A2, A3)\n<img src=\"img/pd2.png\" width=\"30%\" height=\"30%\">\nWe then perform the prediction as usual with this new set of data. As we can imagine, PDP would generate num_rows * num_grid_points (here, the number of grid point equals the number of unique values of the target feature, more on this later) number of predictions and average them for each unique value of Feature A.\n<img src=\"img/pd3.png\" width=\"30%\" height=\"30%\">\nIn the end, PDP would only plot out the average predictions for each unique value of our target feature.\n<img src=\"img/pd4.png\" width=\"30%\" height=\"30%\">\nLet's now formalize this idea with some notation. The partial dependence function is defined as:\n$$\n\\begin{align}\n\\hat{f}{x_S}(x_S) = E{x_C} \\left[ f(x_S, x_C) \\right]\n\\end{align}\n$$\nThe term $x_S$ denotes the set of features for which the partial dependence function should be plotting and $x_C$ are all other features that were used in the machine learning model $f$. In other words, if there were $p$ predictors, $S$ is a subset of our $p$ predictors, $S \\subset \\left{ x_1, x_2, \\ldots, x_p \\right}$, $C$ would be complementing $S$ such that $S \\cup C = \\left{x_1, x_2, \\ldots, x_p\\right}$. The function above is then estimated by calculating averages in the training data, which is also known as Monte Carlo method:\n$$\n\\begin{align}\n\\hat{f}{x_S}(x_S) = \\frac{1}{n} \\sum{i=1}^n f(x_S, x_{Ci})\n\\end{align}\n$$\nWhere $\\left{x_{C1}, x_{C2}, \\ldots, x_{CN}\\right}$ are the values of $X_C$ occurring over all observations in the training data. 
In other words, in order to calculate the partial dependence of a given variable (or variables), the entire training set must be utilized for every set of joint values. For classification, where the machine learning model outputs probabilities, the partial dependence function displays the probability for a certain class given different values for features $x_s$, a straightforward way to handle multi-class problems is to plot one line per class.\nIndividual Conditional Expectation (ICE) Plot\nAs an extension of a PDP, ICE plot visualizes the relationship between a feature and the predicted responses for each observation. While a PDP visualizes the averaged relationship between features and predicted responses, a set of ICE plots disaggregates the averaged information and visualizes an individual dependence for each observation. Hence, instead of only plotting out the average predictions, ICEbox displays all individual lines. (three lines in total in this case)\n<img src=\"img/pd5.png\" width=\"30%\" height=\"30%\">\nThe authors of the Paper: A. Goldstein, A. Kapelner, J. Bleich, E. Pitkin Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation claims with everything displayed in its raw state, any interesting discovers wouldn’t be shielded because of the averaging inherented with PDP. A vivid example from the paper is shown below:\n<img src=\"img/pd6.png\" width=\"50%\" height=\"50%\">\nIn this example, if we only look at the PDP in Figure b, we would think that on average, the feature X2 is not meaningfully associated with the our target response variable Y. However, if judging from the scatter plot showed in Figure a, this conclusion is plainly wrong. Now if we were to plot out the individual estimated conditional expectation curves, everything becomes more obvious.\n<img src=\"img/pd7.png\" width=\"30%\" height=\"30%\">\nAfter having an understand of the procedure for PDP and ICE plot, we can observe that:\n\nPDP is a global method, it takes into account all instances and makes a statement about the global relationship of a feature with the predicted outcome.\nOne of the main advantage of PDP is that it can be used to interpret the result of any \"black box\" learning methods.\nPDP can be quite computationally expensive when the data set becomes large.\nOwing to the limitations of computer graphics, and human perception, the size of the subsets $x_S$ must be small (l ≈ 1,2,3). There are of course a large number of such subsets, but only those chosen from among the usually much smaller set of highly relevant predictors are likely to be informative.\nPDP can obfuscate relationship that comes from interactions. PDPs show us how the average relationship between feature $x_S$ and $\\hat{y}$ looks like. This works well only in cases where the interactions between $x_S$ and the remaining features $x_C$ are weak. In cases where interactions do exist, the ICE plot may give a lot more insight of the underlying relationship.\n\nImplementation\nWe'll be using the titanic dataset (details of the dataset is listed in the link) to test our implementation.",
"# we download the training data and store it\n# under the `data` directory\ndata_dir = Path('data')\ndata_path = data_dir / 'train.csv'\ndata = pd.read_csv(data_path)\nprint('dimension: ', data.shape)\nprint('features: ', data.columns)\ndata.head()\n\n# some naive feature engineering\ndata['Age'] = data['Age'].fillna(data['Age'].median())\ndata['Embarked'] = data['Embarked'].fillna('S')\ndata['Sex'] = data['Sex'].apply(lambda x: 1 if x == 'male' else 0)\ndata = pd.get_dummies(data, columns = ['Embarked'])\n\n# features/columns that are used\nlabel = data['Survived']\nfeatures = [\n 'Pclass', 'Sex',\n 'Age', 'SibSp',\n 'Parch', 'Fare',\n 'Embarked_C', 'Embarked_Q', 'Embarked_S']\ndata = data[features]\n\nX_train, X_test, y_train, y_test = train_test_split(\n data, label, test_size = 0.2, random_state = 1234, stratify = label)\n\n# fit a baseline random forest model and show its top 2 most important features\nrf = RandomForestClassifier(n_estimators = 50, random_state = 1234)\nrf.fit(X_train, y_train)\n\nprint('top 2 important features:')\nimp_index = np.argsort(rf.feature_importances_)\nprint(features[imp_index[-1]])\nprint(features[imp_index[-2]])",
"Aforementioned, tree-based models lists out the top important features, but it is not clear whether they have a positive or negative impact on the result. This is where tools such as partial dependence plots can aid us communicate the results better to others.",
"from partial_dependence import PartialDependenceExplainer\nplt.rcParams['figure.figsize'] = 16, 9\n\n\n# we specify the feature name and its type to fit the partial dependence\n# result, after fitting the result, we can call .plot to visualize it\n# since this is a binary classification model, when we call the plot\n# method, we tell it which class are we targeting, in this case 1 means\n# the passenger did indeed survive (more on centered argument later)\npd_explainer = PartialDependenceExplainer(estimator = rf, verbose = 0)\npd_explainer.fit(data, feature_name = 'Sex', feature_type = 'cat')\npd_explainer.plot(centered = False, target_class = 1)\nplt.show()",
"Hopefully, we can agree that the partial dependence plot makes intuitive sense, as for the categorical feature Sex, 1 indicates that the passenger was a male. And we know that during the titanic accident, the majority of the survivors were female passenger, thus the plot is telling us male passengers will on average have around 40% chance lower of surviving when compared with female passengers. Also instead of only plotting the \"partial dependence\" plot, the plot also fills between the standard deviation range. This is essentially borrowing the idea from ICE plot that only plotting the average may obfuscate the relationship.\nCentered plot can be useful when we are not interested in seeing the absolute change of a predicted value, but rather the difference in prediction compared to a fixed point of the feature range.",
"# centered = True is actually the default\npd_explainer.plot(centered = True, target_class = 1)\nplt.show()",
"We can perform the same process for numerical features such as Fare. We know that more people from the upper class survived, and people from the upper class generally have to pay more Fare to get onboard the titanic. The partial dependence plot below also depicts this trend.",
"pd_explainer.fit(data, feature_name = 'Fare', feature_type = 'num')\npd_explainer.plot(target_class = 1)\nplt.show()",
"If you prefer to create your own visualization, you can call the results_ attribute to access the partial dependence result. And for those that are interested in the implementation details, the code can be obtained at the following link.\nWe'll conclude our discussion on parital dependence plot by providing a link to another blog that showcases this method's usefulness in ensuring the behavior of the new machine learning model does intuitively and logically match our intuition and does not differ significantly from a baseline model. Blog: Using Partial Dependence to Compare Sort Algorithms\nReference\n\nBlog: Introducing PDPbox\nOnline Book: Partial Dependence Plot (PDP)\nMathworks Documentation: plotPartialDependence\nGithub: PDPbox - python partial dependence plot toolbox"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
eshlykov/mipt-day-after-day | statistics/hw-13/hw-13.3.ipynb | unlicense | [
"Теоретическое домашнее задание 13\nЗадача 3\nИспользуя метод линейной регрессии, постройте приближение функции $f$ многочленом третьей степени по следующим данным:\n|$f$|3.9|5.0|5.7|6.5|7.1|7.6|7.8|8.1|8.4| \n|---|---|---|---|---|---|---|---|---|---|\n|$x$|4.0|5.2|6.1|7.0|7.9|8.6|8.9|9.5|9.9|",
"import numpy\nimport scipy\nfrom scipy.linalg import inv\nimport matplotlib.pyplot\n%matplotlib inline",
"Решение.\nЯсно, что нам нужна модель $y=\\theta_0 + \\theta_1 x + \\theta_2 x^2 + \\theta_3 x^3$.",
"n = 9 # Размер выборки\nk = 4 # Количество параметров",
"Рассмотрим отклик.",
"Y = numpy.array([3.9, 5.0, 5.7, 6.5, 7.1, 7.6, 7.8, 8.1, 8.4]).reshape(n, 1)\nprint(Y)",
"Рассмотрим регрессор.",
"x = numpy.array([4.0, 5.2, 6.1, 7.0, 7.9, 8.6, 8.9, 9.5, 9.9])\nX = numpy.ones((n, k))\nX[:, 1] = x\nX[:, 2] = x ** 2\nX[:, 3] = x ** 3\nprint(X)",
"Воспользуемся классической формулой для получения оценки.",
"Theta = inv(X.T @ X) @ X.T @ Y\nprint(Theta)",
"Построим график полученной функции и нанесем точки выборки.",
"x = numpy.linspace(3.5, 10.4, 1000)\ny = Theta[0] + x * Theta[1] + x ** 2 * Theta[2] + x ** 3 * Theta[3]\n\nmatplotlib.pyplot.figure(figsize=(20, 8))\nmatplotlib.pyplot.plot(x, y, color='turquoise', label='Предсказание', linewidth=2.5)\nmatplotlib.pyplot.scatter(X[:, 1], Y, s=40.0, label='Выборка', color='blue', alpha=0.5)\nmatplotlib.pyplot.legend()\nmatplotlib.pyplot.title('Функция $f(x)$')\nmatplotlib.pyplot.grid()\nmatplotlib.pyplot.show()",
"Вывод. Кубический многочлен, полученный методом линейной регресии, отлично приближает данную функцию. По графику видно, однако, что ее может хорошо приблизить и линейный многочлен.\n\n<font color=\"#808080\"> Странно, что в этом задании ничего больше не требуют, но что просили, то я и сделал. Даже график построил.</font>"
] | [
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GitHub Jupyter Dataset
Dataset Description
This is a parsed and preprocessed version of the GitHub-Jupyter Dataset, a dataset extracted from Jupyter notebooks on BigQuery. Only markdown and Python cells are kept, and the markdown is converted to plain text. Heuristics are also applied to filter out notebooks with little content as well as very long or very short cells.
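As a quick orientation, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library and reconstructing one example as readable text. It is a sketch under stated assumptions, not the canonical loading recipe: the repository path `user/github-jupyter-parsed` is a placeholder to replace with this dataset's actual Hub path, and the field names (`repo_name`, `path`, `license`, `cells`, `types`) and the `train` split name are assumed from how the preview examples are structured.
```python
# Minimal sketch (assumptions: repository path, split name, and field names).
from datasets import load_dataset

# Placeholder path -- replace with the actual Hub path of this dataset.
ds = load_dataset("user/github-jupyter-parsed", split="train")

example = ds[0]
print(example["repo_name"], example["path"], example["license"])

# `cells` and `types` are assumed to be parallel lists: each entry in `cells`
# is the text of one cell (markdown already converted to plain text) and the
# matching entry in `types` is either "markdown" or "code".
for cell, cell_type in zip(example["cells"], example["types"]):
    header = "--- markdown ---" if cell_type == "markdown" else "--- code ---"
    print(header)
    print(cell[:200])  # preview only the first 200 characters of each cell
```
If the full download is not needed, passing `streaming=True` to `load_dataset` lets you iterate over examples lazily instead of materializing the whole split.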
Licenses
Each example carries the license of its source repository. There are 15 licenses in total (a minimal sketch of filtering by license follows the list below):
[
'mit',
'apache-2.0',
'gpl-3.0',
'gpl-2.0',
'bsd-3-clause',
'agpl-3.0',
'lgpl-3.0',
'lgpl-2.1',
'bsd-2-clause',
'cc0-1.0',
'epl-1.0',
'mpl-2.0',
'unlicense',
'isc',
'artistic-2.0'
]
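Because every example carries its repository's license, restricting the data to a chosen license allow-list is a one-liner with `Dataset.filter`. The sketch below reuses the placeholder repository path from the loading example above; the particular allow-list is only illustrative, not a legal recommendation.
```python
# Minimal sketch: keep only examples whose repository uses a permissive license.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("user/github-jupyter-parsed", split="train")  # placeholder path

# Illustrative allow-list drawn from the 15 licenses listed above.
ALLOWED = {"mit", "apache-2.0", "bsd-3-clause", "bsd-2-clause", "isc"}

permissive = ds.filter(lambda ex: ex["license"] in ALLOWED)
print(f"kept {len(permissive)} of {len(ds)} examples")

# Per-license counts over the full split, handy for auditing the mix.
print(Counter(ds["license"]).most_common())
```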