5.3) Predicting Comments for a Specific Factor
comments_ = []
for i in df['views']:
    comments_.append(int(i * .01388))
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
6. Combining Factor + Error + Ratios
comments = np.array(df['comments'])
error = []

for i in tqdm(range(st, end + 1, 1)):              # sweep the start-to-end range of factors
    factor = i / 100000
    comments_ = []
    for views in df['views']:                      # predict comments for this specific factor
        comments_.append(int(factor * views))
    comments_ = np.array(comments_)

    total_error = []
    for j in range(len(comments)):                 # absolute error between actual and predicted comments
        diff = comments[j] - comments_[j]
        if diff >= 0:                              # taking the absolute value
            total_error.append(diff)
        else:
            total_error.append(-diff)
    total_error = np.array(total_error)
    error.append([factor, int(total_error.mean())])    # mean error for this factor

error = pd.DataFrame(error, columns=['Factor', 'Error'])
100%|██████████████████████████████████████| 5416/5416 [00:07<00:00, 770.16it/s]
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
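The nested loops above work, but the same factor sweep can be expressed with NumPy broadcasting. A minimal vectorized sketch, assuming `df`, `np`, and `pd` are loaded and `st`/`end` are the same (previously defined) range bounds used above:

```python
import numpy as np
import pandas as pd

factors = np.arange(st, end + 1) / 100000      # candidate factors, as in the loop above
views = df['views'].to_numpy()
actual = df['comments'].to_numpy()

# predictions for every factor at once: shape (n_factors, n_videos)
preds = np.floor(factors[:, None] * views)

# mean absolute error per factor, matching the hand-rolled absolute-value logic
mae = np.abs(actual - preds).mean(axis=1).astype(int)

error = pd.DataFrame({'Factor': factors, 'Error': mae})
```

This removes both Python-level loops, so the sweep runs in a fraction of the time.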
Finding the Best Factor that Fits Comments to Views
final_factor = error.sort_values(by='Error').head(10)['Factor'].mean()
final_factor

comments_ = []
for i in df['views']:
    comments_.append(int(i * final_factor))

df['pred_comments'] = comments_
df.head()
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
Actual vs. Predicted Comments with the Best-Fit Factor
data = []
for i in df.values:
    data.append([i[2], i[4], i[10]])

df_ = pd.DataFrame(data, columns=['views', 'comments', 'pred_comments'])

views = list(df_.sort_values(by='views')['views'])
comments = list(df_.sort_values(by='views')['comments'])
comments_ = list(df_.sort_values(by='views')['pred_comments'])

fig, ax = plt.subplots(figsize=(15, 4))
plt.plot(views, comments, label='Actual')
plt.plot(views, comments_, label='Predicted')
plt.legend()
plt.xlabel('Views of the Video')
plt.ylabel('Number of Comments')
plt.show()
_____no_output_____
Apache-2.0
Week -12/TED Talk - Correlation - Comments/Correlation - Comments.ipynb
AshishJangra27/Data-Science-Specialization
Multicollinearity and Regression Analysis

In this tutorial, we will be using a spatial dataset of county-level election and demographic statistics for the United States. This time, we'll explore different methods to diagnose and account for multicollinearity in our data. Specifically, we'll calculate the variance inflation factor (VIF), and compare parameter estimates and model fit in a multivariate regression predicting 2016 county voting preferences using an OLS model, a ridge regression, a lasso regression, and an elastic net regression.

Objectives:
* ***Calculate a variance inflation factor to diagnose multicollinearity.***
* ***Use geographically weighted regression to identify whether the multicollinearity is scale dependent.***
* ***Interpret model summary statistics.***
* ***Describe how multicollinearity impacts stability in parameter estimates.***
* ***Explain the variance/bias tradeoff and describe how to use it to improve models.***
* ***Draw a conclusion based on contrasting models.***

Review:
* [Dormann, C. et al. (2013). Collinearity: a review of methods to deal with it and a simulation study evaluating their performance. Ecography, 36(1), 27-46.](https://onlinelibrary.wiley.com/doi/full/10.1111/j.1600-0587.2012.07348.x)
import numpy as np
import geopandas as gpd
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedKFold
from sklearn.linear_model import Ridge
from sklearn.linear_model import Lasso
from sklearn.linear_model import ElasticNet
from numpy import mean
from numpy import std
from numpy import absolute
from libpysal.weights.contiguity import Queen
import libpysal
from statsmodels.api import OLS

sns.set_style('white')
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
First, we're going to load the 'Elections' dataset from the libpysal library, which is a very easy-to-use API that accesses the Geodata Center at the University of Chicago.
* More on spatial data science resources from UC: https://spatial.uchicago.edu/
* A list of datasets available through libpysal: https://geodacenter.github.io/data-and-lab//
from libpysal.examples import load_example

elections = load_example('Elections')
#note the folder where your data now lives:

#First, let's see what files are available in the 'Elections' data example
elections.get_file_list()
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
When you are out in the world doing research, you often will not find a ready-made function to download your data. That's okay! You know how to get this dataset without using pysal! Do a quick review of online data formats and automatic data downloads.

TASK 1: Use urllib functions to download this file directly from the internet to your H:/EnvDatSci folder (not your git repository). Extract the zipped file you've downloaded into a subfolder called H:/EnvDatSci/elections.
# Task 1 code here:

#import required function:
import urllib.request

#define online filepath (aka url):
url = "https://geodacenter.github.io/data-and-lab//data/election.zip"

#define local filepath:
local = '../../elections.zip'

#download elections data:
urllib.request.urlretrieve(url, local)

#unzip file: see if google can help you figure this one out!
import shutil
shutil.unpack_archive(local, "../../../")
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
TASK 2: Use geopandas to read in this shapefile. Call your geopandas.DataFrame "votes"
# TASK 2: Use geopandas to read in this shapefile. Call your geopandas.DataFrame "votes"
votes = gpd.read_file("H:/EnvDataSci/election/election.shp")
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
EXTRA CREDIT TASK (+2pts): use os to delete the elections data downloaded by pysal in your C: drive that you are no longer using.
# Extra credit task:

#Let's view the shapefile to get a general idea of the geometry we're looking at:
%matplotlib inline
votes.plot()

#View the first few lines of the dataset
votes.head()

#Since there are too many columns for us to view on a single page using "head",
#we can just print out the column names so we have them all listed for reference
for col in votes.columns:
    print(col)
STATEFP COUNTYFP GEOID ALAND AWATER area_name state_abbr PST045214 PST040210 PST120214 POP010210 AGE135214 AGE295214 AGE775214 SEX255214 RHI125214 RHI225214 RHI325214 RHI425214 RHI525214 RHI625214 RHI725214 RHI825214 POP715213 POP645213 POP815213 EDU635213 EDU685213 VET605213 LFE305213 HSG010214 HSG445213 HSG096213 HSG495213 HSD410213 HSD310213 INC910213 INC110213 PVY020213 BZA010213 BZA110213 BZA115213 NES010213 SBO001207 SBO315207 SBO115207 SBO215207 SBO515207 SBO415207 SBO015207 MAN450207 WTN220207 RTN130207 RTN131207 AFN120207 BPS030214 LND110210 POP060210 Demvotes16 GOPvotes16 total_2016 pct_dem_16 pct_gop_16 diff_2016 pct_pt_16 total_2012 Demvotes12 GOPvotes12 county_fip state_fips pct_dem_12 pct_gop_12 diff_2012 pct_pt_12 geometry
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
You can use pandas summary statistics to get an idea of how county-level data varies across the United States.

TASK 3: For example, how did the county mean percent Democratic vote change between 2012 (pct_dem_12) and 2016 (pct_dem_16)?

Look here for more info on pandas summary statistics: https://www.earthdatascience.org/courses/intro-to-earth-data-science/scientific-data-structures-python/pandas-dataframes/run-calculations-summary-statistics-pandas-dataframes/
#Task 3
demchange = votes["pct_dem_16"].mean() - votes["pct_dem_12"].mean()
print("The mean percent Democratic vote changed by", demchange, "between 2012 and 2016.")
The mean percent Democratic vote changed by -0.06783446699806961 between 2012 and 2016.
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
We can also plot histograms of the data. Below, smoothed histograms from the seaborn package (imported as sns) let us get an idea of the distribution of percent democratic votes in 2012 (left) and 2016 (right).
# Plot histograms:
f, ax = plt.subplots(1, 2, figsize=(2*3*1.6, 2))
for i, col in enumerate(['pct_dem_12', 'pct_dem_16']):
    sns.kdeplot(votes[col].values, shade=True, color='slategrey', ax=ax[i])
    ax[i].set_title(col.split('_')[1])

# Plot spatial distribution of
# dem vote in 2012 and 2016 with histogram.
f, ax = plt.subplots(2, 2, figsize=(1.6*6 + 1, 2.4*3),
                     gridspec_kw=dict(width_ratios=(6, 1)))
for i, col in enumerate(['pct_dem_12', 'pct_dem_16']):
    votes.plot(col, linewidth=.05, cmap='RdBu', ax=ax[i, 0])
    ax[i, 0].set_title(['2012', '2016'][i] + " % democratic vote")
    ax[i, 0].set_xticklabels('')
    ax[i, 0].set_yticklabels('')
    sns.kdeplot(votes[col].values, ax=ax[i, 1], vertical=True,
                shade=True, color='slategrey')
    ax[i, 1].set_xticklabels('')
    ax[i, 1].set_ylim(-1, 1)
f.tight_layout()
plt.show()
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
TASK 4: Make a new column on your geopandas dataframe called "pct_dem_change" and plot it using the syntax above. Explain the plot.
# Task 4: add new column pct_dem_change to votes:
votes["pct_dem_change"] = votes.pct_dem_16 - votes.pct_dem_12

#Task 4: plot your pct_dem_change variable on a map:
f, ax = plt.subplots(figsize=(8, 5))
votes.plot("pct_dem_change", linewidth=.05, cmap='RdBu', ax=ax)
ax.set_title("Change in % democratic vote, 2012-2016")
plt.show()
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
Click on this url to learn more about the variables in this dataset: https://geodacenter.github.io/data-and-lab//county_election_2012_2016-variables/

As you can see, there are a lot of data values available in this dataset. Let's say we want to learn more about what county-level factors influence percent change in democratic vote (pct_dem_change). Looking at the data description on the link above, you see that this is an exceptionally large dataset with many variables. During lecture, we discussed how there are two types of multicollinearity in our data:

* *Intrinsic multicollinearity:* an artifact of how we make observations. Often our measurements serve as proxies for some latent process (for example, we can measure percent silt, percent sand, and percent clay as proxies for the latent variable of soil texture). There will be slight variability in the information content between each proxy measurement, but they will not be independent of one another.

* *Incidental collinearity:* an artifact of how we sample complex populations. If we collect data from a subsample of the landscape, we may not see all combinations of our predictor variables (we do not have good cross replication across our variables). We often induce collinearity in our data simply because we are limited in our ability to sample the environment at the scale of the temporal/spatial variability of our process of interest. Incidental collinearity is a model formulation problem. (See here for more info on how to avoid it: https://people.umass.edu/sdestef/NRC%20601/StudyDesignConcepts.pdf)

TASK 5: Looking at the data description, pick two variables that you believe will be intrinsically multicollinear. List and describe these variables. Why do you think they will be collinear? Is this an example of *intrinsic* or *incidental* collinearity?

*Click on this box to enter text*

I chose:
* "RHI125214", White alone, percent, 2014
* "RHI225214", Black or African American alone, percent, 2014

These variables are intrinsically multicollinear. A decrease in one of a finite number of races implicitly signifies an increase in another race.

Multivariate regression in observational data:

Our next step is to formulate our predictive/diagnostic model. We want to create a subset of the "votes" geopandas data frame that contains our response variable (pct_pt_16), the two variables you selected under TASK 5, and eight additional predictor variables. First, create a list of the variables you'd like to select.

TASK 6: Create a subset of votes called "my_list" containing only your selected predictor variables. Make sure you use the two variables selected under TASK 5, plus eight additional variables.
# Task 6: create a subset of votes called "my_list" with all your subset variables.
#my_list = ["pct_pt_16", <list your variables here>]

#check to make sure all your columns are there:
votes[my_list].head()
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
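For illustration only, here is one possible way to fill in `my_list`, using column codes from the listing printed earlier (descriptions follow the dataset's variable page; swap in your own TASK 5/6 choices):

```python
# Hypothetical selection -- replace with your own TASK 6 variables.
my_list = ["pct_pt_16",                 # response variable
           "RHI125214", "RHI225214",   # the two intrinsically collinear race variables
           "INC910213", "PVY020213",   # per capita income; percent below poverty
           "EDU635213", "EDU685213",   # high-school and bachelor's attainment
           "AGE775214", "SEX255214",   # percent 65+, percent female
           "POP060210", "HSG445213"]   # population density; homeownership rate

votes[my_list].head()
```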
Scatterplot matrix

We call the process of getting to know your data (the ranges and distributions of the data, as well as any relationships between variables) "exploratory data analysis". Pairwise plots of your variables, called scatterplots, can provide a lot of insight into the type of relationships you have between variables. A scatterplot matrix is a pairwise comparison of all variables in your dataset.
#Use seaborn.pairplot to plot a scatterplot matrix of your 10 variable subset:
sns.pairplot(votes[my_list])
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
TASK 7: Do you observe any collinearity in this dataset? How would you describe the relationship between the two "intrinsically collinear" variables that you selected based on the variable descriptions?

*Type answer here*

TASK 8: What is plotted on the diagonal panels of the scatterplot matrix?

*Type answer here*

Diagnosing collinearity globally:

During class, we discussed the Variance Inflation Factor, which describes the magnitude of variance inflation that can be expected in an OLS parameter estimate for a given variable, *given pairwise collinearity between that variable and another variable*.
#VIF = 1/(1-R2) of a pairwise OLS regression between two predictor variables.
#We can use the built-in function "variance_inflation_factor" from statsmodels to calculate VIF.

#Learn more about the function
?variance_inflation_factor

#Calculate VIFs on our dataset
vif = pd.DataFrame()
vif["VIF Factor"] = [variance_inflation_factor(votes[my_list[1:10]].values, i)
                     for i in range(votes[my_list[1:10]].shape[1])]
vif["features"] = votes[my_list[1:10]].columns
vif.round()
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
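To connect the function's output back to the formula, a quick sketch, assuming `votes` and `my_list` exist as above: regress one predictor on the others and apply VIF = 1/(1-R²). (The result may differ slightly from `variance_inflation_factor`, which uses the design matrix exactly as passed, without adding an intercept.)

```python
import statsmodels.api as sm

target = my_list[1]                                  # any one predictor
others = [c for c in my_list[1:10] if c != target]   # the remaining predictors

fit = sm.OLS(votes[target], sm.add_constant(votes[others])).fit()
print(f"VIF for {target}: {1 / (1 - fit.rsquared):.2f}")
```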
Collinearity is always present in observational data. When is it a problem?

Generally speaking, VIF > 10 is considered "too much" collinearity. But this value is somewhat arbitrary: the extent to which variance inflation will impact your analysis is highly context dependent. There are two primary contexts where variance inflation is problematic:

1\. **You are using your analysis to evaluate variable importance:** If you are using parameter estimates from your model to diagnose which observations have physically important relationships with your response variable, variance inflation can make an important predictor look unimportant, and parameter estimates will be highly leveraged by small changes in the data.

2\. **You want to use your model to make predictions in a situation where the specific structure of collinearity between variables may have shifted:** When training a model on collinear data, the model only applies to data with that exact structure of collinearity.

Calculate a linear regression on the global data:

In this next step, we're going to calculate a linear regression on our data and determine whether there is a statistically significant relationship between per capita income and percent change in democratic vote.
#first, formulate the model. See weather_trend.py in "Git_101" for a refresher on how.

#extract the variables that you want to use to "predict"
X = np.array(votes[my_list[1:10]].values)
#standardize data to assist in interpretation of coefficients
X = (X - np.mean(X, axis=0)) / np.std(X, axis=0)

#extract the variable that we want to "predict"
Y = np.array(votes['pct_dem_change'].values)
#standardize data to assist in interpretation of coefficients
Y = (Y - np.mean(Y)) / np.std(Y)

lm = OLS(Y, X)
lm_results = lm.fit().summary()
print(lm_results)
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
TASK 9: Which coefficients indicate a statistically significant relationship between the parameter and pct_dem_change? What is your most important predictor variable? How can you tell?

*Type answer here*

TASK 10: Are any of these parameters subject to variance inflation? How can you tell?

*Type answer here*

Now, let's plot our residuals to see if there are any spatial patterns in them. Remember: residuals = observed - fitted values.
#Add model residuals to our "votes" geopandas dataframe:
votes['lm_resid'] = OLS(Y, X).fit().resid
sns.kdeplot(votes['lm_resid'].values, shade=True, color='slategrey')
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
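As a quick numeric companion to the density plot above, a sketch (assuming scipy is available in this environment) that checks the residual mean and runs D'Agostino and Pearson's normality test:

```python
from scipy import stats

resid = votes['lm_resid'].dropna()
print(f"mean residual: {resid.mean():.4f}")

stat, p = stats.normaltest(resid)            # tests the null hypothesis of normality
print(f"normality test p-value: {p:.4g}")    # a small p-value suggests non-normal residuals
```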
TASK 11: Are our residuals normally distributed with a mean of zero? What does that mean?

*Type answer here*

Penalized regression: ridge penalty

In penalized regression, we intentionally bias the parameter estimates to stabilize them given collinearity in the dataset. From https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/:

"As mentioned before, ridge regression performs ‘L2 regularization‘, i.e. it adds a factor of sum of squares of coefficients in the optimization objective. Thus, ridge regression optimizes the following:

**Objective = RSS + α * (sum of square of coefficients)**

Here, α (alpha) is the parameter which balances the amount of emphasis given to minimizing RSS vs minimizing sum of square of coefficients. α can take various values:

* **α = 0:** The objective becomes same as simple linear regression. We’ll get the same coefficients as simple linear regression.
* **α = ∞:** The coefficients will approach zero. Why? Because of infinite weightage on square of coefficients, anything less than zero will make the objective infinite.
* **0 < α < ∞:** The magnitude of α will decide the weightage given to different parts of objective. The coefficients will be somewhere between 0 and ones for simple linear regression."

In other words, the ridge penalty shrinks coefficients such that collinear coefficients will have more similar coefficient values. It has a "grouping" tendency.
# when alpha=0, Ridge equals OLS; here we use alpha=1
model = Ridge(alpha=1)

# define model evaluation method
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)

# evaluate model
scores = cross_val_score(model, X, Y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)

#force scores to be positive
scores = absolute(scores)
print('Mean MAE: %.3f (%.3f)' % (mean(scores), std(scores)))

model.fit(X, Y)
#Print out the model coefficients
print(model.coef_)
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
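One way to see the shrinkage (and the grouping tendency) directly is to refit the ridge over a grid of alphas and plot the coefficient paths; a minimal sketch, assuming `X` and `Y` from the OLS step above:

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import Ridge

alphas = np.logspace(-2, 4, 30)
coef_paths = [Ridge(alpha=a).fit(X, Y).coef_ for a in alphas]

plt.figure(figsize=(8, 4))
plt.plot(alphas, coef_paths)
plt.xscale('log')
plt.xlabel('alpha (L2 penalty strength)')
plt.ylabel('standardized coefficient')
plt.title('Ridge coefficient paths')
plt.show()
```

Collinear predictors tend to converge toward shared values before everything shrinks to zero.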
Penalized regression: lasso penalty

From https://www.analyticsvidhya.com/blog/2016/01/ridge-lasso-regression-python-complete-tutorial/:

"LASSO stands for Least Absolute Shrinkage and Selection Operator. I know it doesn’t give much of an idea but there are 2 key words here – ‘absolute‘ and ‘selection‘. Lets consider the former first and worry about the latter later. Lasso regression performs L1 regularization, i.e. it adds a factor of sum of absolute value of coefficients in the optimization objective. Thus, lasso regression optimizes the following:

**Objective = RSS + α * (sum of absolute value of coefficients)**

Here, α (alpha) works similar to that of ridge and provides a trade-off between balancing RSS and magnitude of coefficients. Like that of ridge, α can take various values. Lets iterate it here briefly:

* **α = 0:** Same coefficients as simple linear regression
* **α = ∞:** All coefficients zero (same logic as before)
* **0 < α < ∞:** coefficients between 0 and that of simple linear regression

Yes its appearing to be very similar to Ridge till now. But just hang on with me and you’ll know the difference by the time we finish."

In other words, the lasso penalty shrinks unimportant coefficients down towards zero, automatically "selecting" important predictor variables. But what if that shrunken coefficient is induced by incidental collinearity (i.e. is a feature of how we sampled our data)?
# when alpha=0, Lasso equals OLS
model = Lasso(alpha=0)

# define model evaluation method
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)

# evaluate model
scores = cross_val_score(model, X, Y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)

#force scores to be positive
scores = absolute(scores)
print('Mean MAE: %.3f (%.3f)' % (mean(scores), std(scores)))

model.fit(X, Y)
#Print out the model coefficients
print(model.coef_)
#How do these compare with OLS coefficients above?

# as alpha grows very large, all coefficients become exactly zero, and MAE flattens
# out at the error of simply predicting the mean of our response variable:
model = Lasso(alpha=10000000)

cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, Y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)
scores = absolute(scores)
print('Mean MAE: %.3f (%.3f)' % (mean(scores), std(scores)))

model.fit(X, Y)
print(model.coef_)
#How do these compare with OLS coefficients above?
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
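Rather than hand-picking alpha at the extremes, cross-validation can choose it; a short sketch with scikit-learn's LassoCV, again assuming `X` and `Y` from above:

```python
from sklearn.linear_model import LassoCV

lasso_cv = LassoCV(cv=10, random_state=1).fit(X, Y)
print(f"alpha chosen by cross-validation: {lasso_cv.alpha_:.4f}")
print("coefficients:", lasso_cv.coef_)   # exact zeros mark predictors lasso has dropped
```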
Penalized regression: elastic net penalty

The lasso penalty shrinks unimportant coefficients down towards zero, automatically "selecting" important predictor variables. The ridge penalty shrinks coefficients of collinear predictor variables nearer to each other, effectively partitioning the magnitude of response between them instead of "arbitrarily" assigning it to one of them.

We can also run a regression with a linear combination of the ridge and lasso penalties, called the elastic net, that has a useful property called "group selection." The ridge penalty still works to distribute response variance equally between members of "groups" of collinear predictor variables. The lasso penalty still works to shrink certain coefficients to exactly zero so they can be ignored in model formulation. The elastic net produces models that are both sparse and stable under collinearity, by shrinking the parameters of members of unimportant collinear groups to exactly zero:
# elastic net: a blend of ridge and lasso; l1_ratio sets the mix (0 = ridge, 1 = lasso)
model = ElasticNet(alpha=1, l1_ratio=0.2)

# define model evaluation method
cv = RepeatedKFold(n_splits=10, n_repeats=3, random_state=1)

# evaluate model
scores = cross_val_score(model, X, Y, scoring='neg_mean_absolute_error', cv=cv, n_jobs=-1)

#force scores to be positive
scores = absolute(scores)
print('Mean MAE: %.3f (%.3f)' % (mean(scores), std(scores)))

model.fit(X, Y)
#Print out the model coefficients
print(model.coef_)
#How do these compare with OLS coefficients above?
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
TASK 12: Match these elastic net coefficients up with your original data. Do you see a logical grouping(s) between variables that have non-zero coefficients? Explain why or why not.

*Type answer here*
# Task 12 scratch cell:
_____no_output_____
MIT
CodeSprints/multicollinearity_methods.ipynb
jjessamy/EnvDatSci2021
Test submission of a personal tax return with business specification (næringsspesifikasjon)

This demo is meant to show how the flow for an end-user system can fetch a draft, make changes, and validate/check it against Skatteetaten's APIs, in order to submit it via Altinn3.
try:
    from altinn3 import *
    from skatteetaten_api import main_relay, base64_decode_response, decode_dokument
    import requests
    import base64
    import xmltodict
    import xml.dom.minidom
    from pathlib import Path
except ImportError as e:
    print("One or more dependencies are missing; install them via pip, see the requirements.txt file for details")
    raise ImportError(e)


# helper method if you want to see a request printed as curl
def print_request_as_curl(r):
    command = "curl -X {method} -H {headers} -d '{data}' '{uri}'"
    method = r.request.method
    uri = r.request.url
    data = r.request.body
    headers = ['"{0}: {1}"'.format(k, v) for k, v in r.request.headers.items()]
    headers = " -H ".join(headers)
    print(command.format(method=method, headers=headers, data=data, uri=uri))
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
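A hypothetical usage of the `print_request_as_curl` helper, with a placeholder URL; any `requests` response object works, since the helper reads the prepared request off the response:

```python
import requests

r = requests.get("https://example.org")   # placeholder request, for illustration only
print_request_as_curl(r)                  # echoes the request as a replayable curl command
```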
Generate an ID-porten token

The token is valid for 300 seconds; re-run this part if you have not reached the Altinn3 part within 300 seconds.
idporten_header = main_relay()
https://oidc-ver2.difi.no/idporten-oidc-provider/authorize?scope=skatteetaten%3Aformueinntekt%2Fskattemelding%20openid&acr_values=Level3&client_id=8d7adad7-b497-40d0-8897-9a9d86c95306&redirect_uri=http%3A%2F%2Flocalhost%3A12345%2Ftoken&response_type=code&state=5lCEToPZskoHXWGs-ghf4g&nonce=1638258045740949&resource=https%3A%2F%2Fmp-test.sits.no%2Fapi%2Feksterntapi%2Fformueinntekt%2Fskattemelding%2F&code_challenge=gnh30mujVP4US-TgTN7nvsGjRU9MCWYwqZ_xolRt6zI=&code_challenge_method=S256&ui_locales=nb Authorization token received {'code': ['TBNZZzWsfhY2LgB3mk8nbvUR8KmXhngSQ5HeuDeW9NI'], 'state': ['5lCEToPZskoHXWGs-ghf4g']} JS : {'access_token': 'eyJraWQiOiJjWmswME1rbTVIQzRnN3Z0NmNwUDVGSFpMS0pzdzhmQkFJdUZiUzRSVEQ0IiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJXQTdMRE51djZiLUNpZkk0aFNtTWRmQ2dubmxSNmRLQVJvU0Q4Vkh6WGEwPSIsImlzcyI6Imh0dHBzOlwvXC9vaWRjLXZlcjIuZGlmaS5ub1wvaWRwb3J0ZW4tb2lkYy1wcm92aWRlclwvIiwiY2xpZW50X2FtciI6Im5vbmUiLCJwaWQiOiIyOTExNDUwMTMxOCIsInRva2VuX3R5cGUiOiJCZWFyZXIiLCJjbGllbnRfaWQiOiI4ZDdhZGFkNy1iNDk3LTQwZDAtODg5Ny05YTlkODZjOTUzMDYiLCJhdWQiOiJodHRwczpcL1wvbXAtdGVzdC5zaXRzLm5vXC9hcGlcL2Vrc3Rlcm50YXBpXC9mb3JtdWVpbm50ZWt0XC9za2F0dGVtZWxkaW5nXC8iLCJhY3IiOiJMZXZlbDMiLCJzY29wZSI6Im9wZW5pZCBza2F0dGVldGF0ZW46Zm9ybXVlaW5udGVrdFwvc2thdHRlbWVsZGluZyIsImV4cCI6MTYzODM0NDQ1NSwiaWF0IjoxNjM4MjU4MDU2LCJjbGllbnRfb3Jnbm8iOiI5NzQ3NjEwNzYiLCJqdGkiOiJFWVNfYVZNWU5KcUlEYmRVNG4xWjZqWmdVZ0dWLTBCc2E5TGdQNGtxOEtNIiwiY29uc3VtZXIiOnsiYXV0aG9yaXR5IjoiaXNvNjUyMy1hY3RvcmlkLXVwaXMiLCJJRCI6IjAxOTI6OTc0NzYxMDc2In19.rx_TeF6Xv3rwJwCy7DTfhmJ25UiLAQqo06qIXQqw00cg8FZhsNT1GtP40kHhGNrtXg2WfpgBSNNlnew64j9iHyEO1LlZous2GazVU0vjfJT-kWKbos2nhOaxWf0zZStvOwp4WXA9nyta6RwIF4brMa9aFmhWC0019FJPxOKFg8K7D0wHOAZtc5QLd7iL6Hysx35n4MjPEIe0uIQNP7PSRlnbTTxXOmwRJsVems0qgvcik-T3o_mkG7FCbjUCd4B22NB87fSC8HFV63lzseVZ7odldwFvJWsOMqoJEBtsVJVzcl2NeCkxJv0mXXvaOLpBbpnE9Fg8Cysd0SeXyLDkLg', 'id_token': 'eyJraWQiOiJjWmswME1rbTVIQzRnN3Z0NmNwUDVGSFpMS0pzdzhmQkFJdUZiUzRSVEQ0IiwiYWxnIjoiUlMyNTYifQ.eyJhdF9oYXNoIjoiaHNOWVRsRTBhM0JEVEdjSGRQSXBmZyIsInN1YiI6IldBN0xETnV2NmItQ2lmSTRoU21NZGZDZ25ubFI2ZEtBUm9TRDhWSHpYYTA9IiwiYW1yIjpbIk1pbmlkLVBJTiJdLCJpc3MiOiJodHRwczpcL1wvb2lkYy12ZXIyLmRpZmkubm9cL2lkcG9ydGVuLW9pZGMtcHJvdmlkZXJcLyIsInBpZCI6IjI5MTE0NTAxMzE4IiwibG9jYWxlIjoibmIiLCJub25jZSI6IjE2MzgyNTgwNDU3NDA5NDkiLCJzaWQiOiIyN1ZQSUp3cXZrZHlvc0ZBZ0tYMGZsUk9CdHZRTFFFOFRxQl9HZlNfMlhzIiwiYXVkIjoiOGQ3YWRhZDctYjQ5Ny00MGQwLTg4OTctOWE5ZDg2Yzk1MzA2IiwiYWNyIjoiTGV2ZWwzIiwiYXV0aF90aW1lIjoxNjM4MjU4MDU1LCJleHAiOjE2MzgyNTgxNzYsImlhdCI6MTYzODI1ODA1NiwianRpIjoiRXpZWVJhTmRmZm5SeVNzNFVfdE9UbVZsTFRvSURZemlXTS1zVkFMclNmYyJ9.nuNYzanJrliYENhag64WsAe-m5DvZ1uKszCj8akRck-_-FxNH59IwamK6cRP4TcGTM3a5nung4paWkNvfoQOWQajbU51tqffMJzG53qyMDwWTETo7_YotTS4TkhM8aNGdZykch6K5toADEDZzp3IHXXL5-ZAZ8nmcpJOP4tgvACYVATcFK8bbvJ79IPIUKuk_lBiNOckj0PyFpAkIuqjhFAFTsYqcKbpD6_w0RSHUty1cQ4pvsQIXhsli6phpBbefrx3Wm2ArXNRV9eBBS1NaBSnCtVs6ze3fRJs_pKsbFEgIpuxrDK0ICAZROONGDx8631G7_co4iedrNCYD11rfg', 'token_type': 'Bearer', 'expires_in': 86399, 'scope': 'openid skatteetaten:formueinntekt/skattemelding'} The token is good, expires in 86399 seconds Bearer 
eyJraWQiOiJjWmswME1rbTVIQzRnN3Z0NmNwUDVGSFpMS0pzdzhmQkFJdUZiUzRSVEQ0IiwiYWxnIjoiUlMyNTYifQ.eyJzdWIiOiJXQTdMRE51djZiLUNpZkk0aFNtTWRmQ2dubmxSNmRLQVJvU0Q4Vkh6WGEwPSIsImlzcyI6Imh0dHBzOlwvXC9vaWRjLXZlcjIuZGlmaS5ub1wvaWRwb3J0ZW4tb2lkYy1wcm92aWRlclwvIiwiY2xpZW50X2FtciI6Im5vbmUiLCJwaWQiOiIyOTExNDUwMTMxOCIsInRva2VuX3R5cGUiOiJCZWFyZXIiLCJjbGllbnRfaWQiOiI4ZDdhZGFkNy1iNDk3LTQwZDAtODg5Ny05YTlkODZjOTUzMDYiLCJhdWQiOiJodHRwczpcL1wvbXAtdGVzdC5zaXRzLm5vXC9hcGlcL2Vrc3Rlcm50YXBpXC9mb3JtdWVpbm50ZWt0XC9za2F0dGVtZWxkaW5nXC8iLCJhY3IiOiJMZXZlbDMiLCJzY29wZSI6Im9wZW5pZCBza2F0dGVldGF0ZW46Zm9ybXVlaW5udGVrdFwvc2thdHRlbWVsZGluZyIsImV4cCI6MTYzODM0NDQ1NSwiaWF0IjoxNjM4MjU4MDU2LCJjbGllbnRfb3Jnbm8iOiI5NzQ3NjEwNzYiLCJqdGkiOiJFWVNfYVZNWU5KcUlEYmRVNG4xWjZqWmdVZ0dWLTBCc2E5TGdQNGtxOEtNIiwiY29uc3VtZXIiOnsiYXV0aG9yaXR5IjoiaXNvNjUyMy1hY3RvcmlkLXVwaXMiLCJJRCI6IjAxOTI6OTc0NzYxMDc2In19.rx_TeF6Xv3rwJwCy7DTfhmJ25UiLAQqo06qIXQqw00cg8FZhsNT1GtP40kHhGNrtXg2WfpgBSNNlnew64j9iHyEO1LlZous2GazVU0vjfJT-kWKbos2nhOaxWf0zZStvOwp4WXA9nyta6RwIF4brMa9aFmhWC0019FJPxOKFg8K7D0wHOAZtc5QLd7iL6Hysx35n4MjPEIe0uIQNP7PSRlnbTTxXOmwRJsVems0qgvcik-T3o_mkG7FCbjUCd4B22NB87fSC8HFV63lzseVZ7odldwFvJWsOMqoJEBtsVJVzcl2NeCkxJv0mXXvaOLpBbpnE9Fg8Cysd0SeXyLDkLg
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Fetch draft and current return

Here we enter the national identity number we logged in with. If you choose a different national identity number, the one you logged in with must have access to the tax return you want to fetch. The party below is used for internal testing; make sure to use your own test parties when you test.

01014700230 has received an authority-determined assessment (myndighetsfastsetting).

Note that the `/api/skattemelding/v2/` part of the URL is new for 2021.
s = requests.Session()
s.headers = dict(idporten_header)
fnr = "29114501318"  # update with the test national identity numbers you have been assigned
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Draft (utkast)
url_utkast = f'https://mp-test.sits.no/api/skattemelding/v2/utkast/2021/{fnr}'
r = s.get(url_utkast)
r
print(r.text)
<skattemeldingOgNaeringsspesifikasjonforespoerselResponse xmlns="no:skatteetaten:fastsetting:formueinntekt:skattemeldingognaeringsspesifikasjon:forespoersel:response:v2"><dokumenter><skattemeldingdokument><id>SKI:138:41694</id><encoding>utf-8</encoding><content>PD94bWwgdmVyc2lvbj0iMS4wIiBlbmNvZGluZz0iVVRGLTgiPz48c2thdHRlbWVsZGluZyB4bWxucz0idXJuOm5vOnNrYXR0ZWV0YXRlbjpmYXN0c2V0dGluZzpmb3JtdWVpbm50ZWt0OnNrYXR0ZW1lbGRpbmc6ZWtzdGVybjp2OSI+PHBhcnRzcmVmZXJhbnNlPjIyMjU3NjY2PC9wYXJ0c3JlZmVyYW5zZT48aW5udGVrdHNhYXI+MjAyMTwvaW5udGVrdHNhYXI+PGJhbmtMYWFuT2dGb3JzaWtyaW5nPjxrb250bz48aWQ+NTg0OGRjYjE1Y2I1YzkyMGNiMWFhMDc0Yzg2NjA5OWZlNTg2MTY0YjwvaWQ+PGJhbmtlbnNOYXZuPjx0ZWtzdD5TT0ZJRU1ZUiBPRyBCUkVWSUsgUkVWSVNKT048L3Rla3N0PjwvYmFua2Vuc05hdm4+PG9yZ2FuaXNhc2pvbnNudW1tZXI+PG9yZ2FuaXNhc2pvbnNudW1tZXI+OTEwOTMxNDE1PC9vcmdhbmlzYXNqb25zbnVtbWVyPjwvb3JnYW5pc2Fzam9uc251bW1lcj48a29udG9udW1tZXI+PHRla3N0Pjg4MDg4MTY1MTIyPC90ZWtzdD48L2tvbnRvbnVtbWVyPjxpbm5za3VkZD48YmVsb2VwPjxiZWxvZXBJTm9rPjxiZWxvZXBTb21IZWx0YWxsPjY5NTcwMTwvYmVsb2VwU29tSGVsdGFsbD48L2JlbG9lcElOb2s+PGJlbG9lcElWYWx1dGE+PGJlbG9lcD42OTU3MDE8L2JlbG9lcD48L2JlbG9lcElWYWx1dGE+PHZhbHV0YWtvZGU+PHZhbHV0YWtvZGU+Tk9LPC92YWx1dGFrb2RlPjwvdmFsdXRha29kZT48dmFsdXRha3Vycz48dmFsdXRha3Vycz4xPC92YWx1dGFrdXJzPjwvdmFsdXRha3Vycz48L2JlbG9lcD48L2lubnNrdWRkPjxvcHB0amVudGVSZW50ZXI+PGJlbG9lcD48YmVsb2VwSU5vaz48YmVsb2VwU29tSGVsdGFsbD45Njk2PC9iZWxvZXBTb21IZWx0YWxsPjwvYmVsb2VwSU5vaz48YmVsb2VwSVZhbHV0YT48YmVsb2VwPjk2OTY8L2JlbG9lcD48L2JlbG9lcElWYWx1dGE+PHZhbHV0YWtvZGU+PHZhbHV0YWtvZGU+Tk9LPC92YWx1dGFrb2RlPjwvdmFsdXRha29kZT48dmFsdXRha3Vycz48dmFsdXRha3Vycz4xPC92YWx1dGFrdXJzPjwvdmFsdXRha3Vycz48L2JlbG9lcD48L29wcHRqZW50ZVJlbnRlcj48L2tvbnRvPjwvYmFua0xhYW5PZ0ZvcnNpa3Jpbmc+PGFyYmVpZFRyeWdkT2dQZW5zam9uPjxsb2Vubk9nVGlsc3ZhcmVuZGVZdGVsc2VyPjxhcmJlaWRzZ2l2ZXI+PGlkPjAwZWU3MWU1YjFkMTRmYWVjZmMxNzM1Y2ExMTBkYjdjMjcwMTdkN2E8L2lkPjxuYXZuPjxvcmdhbmlzYXNqb25zbmF2bj5UUkVOR0VSRUlEIE9HIEFTSyBSRVZJU0pPTjwvb3JnYW5pc2Fzam9uc25hdm4+PC9uYXZuPjxzYW1sZWRlWXRlbHNlckZyYUFyYmVpZHNnaXZlclBlckJlaGFuZGxpbmdzYXJ0PjxpZD44Y2E5MzJlM2MwMTBkOTdhNmVmMmU1YzhkYmVlZmMyOTIzOWRiZDQ0PC9pZD48YmVsb2VwPjxiZWxvZXA+PGJlbG9lcElOb2s+PGJlbG9lcFNvbUhlbHRhbGw+NTMzNDQ4PC9iZWxvZXBTb21IZWx0YWxsPjwvYmVsb2VwSU5vaz48YmVsb2VwSVZhbHV0YT48YmVsb2VwPjUzMzQ0ODwvYmVsb2VwPjwvYmVsb2VwSVZhbHV0YT48dmFsdXRha29kZT48dmFsdXRha29kZT5OT0s8L3ZhbHV0YWtvZGU+PC92YWx1dGFrb2RlPjx2YWx1dGFrdXJzPjx2YWx1dGFrdXJzPjE8L3ZhbHV0YWt1cnM+PC92YWx1dGFrdXJzPjwvYmVsb2VwPjwvYmVsb2VwPjxiZWhhbmRsaW5nc2FydD48dGVrc3Q+TE9OTjwvdGVrc3Q+PC9iZWhhbmRsaW5nc2FydD48L3NhbWxlZGVZdGVsc2VyRnJhQXJiZWlkc2dpdmVyUGVyQmVoYW5kbGluZ3NhcnQ+PG9yZ2FuaXNhc2pvbnNudW1tZXI+PG9yZ2FuaXNhc2pvbnNudW1tZXI+OTEwOTE5NjYwPC9vcmdhbmlzYXNqb25zbnVtbWVyPjwvb3JnYW5pc2Fzam9uc251bW1lcj48L2FyYmVpZHNnaXZlcj48L2xvZW5uT2dUaWxzdmFyZW5kZVl0ZWxzZXI+PG1pbnN0ZWZyYWRyYWdPZ0tvc3RuYWRlcj48aWQ+TUlOU1RFRlJBRFJBR19PR19LT1NUTkFERVJfS05ZVFRFVF9USUxfQVJCRUlEX09HX0FOTkVOX0lOTlRFS1Q8L2lkPjxtaW5zdGVmcmFkcmFnSUlubnRla3Q+PGZyYWRyYWdzYmVyZXR0aWdldEJlbG9lcD48YmVsb2VwPjxiZWxvZXBTb21IZWx0YWxsPjEwNjc1MDwvYmVsb2VwU29tSGVsdGFsbD48L2JlbG9lcD48L2ZyYWRyYWdzYmVyZXR0aWdldEJlbG9lcD48YmVsb2VwVXRlbkhlbnN5blRpbFZhbGd0UHJpb3JpdGVydEZyYWRyYWdzdHlwZT48YmVsb2VwPjxiZWxvZXBTb21IZWx0YWxsPjEwNjc1MDwvYmVsb2VwU29tSGVsdGFsbD48L2JlbG9lcD48L2JlbG9lcFV0ZW5IZW5zeW5UaWxWYWxndFByaW9yaXRlcnRGcmFkcmFnc3R5cGU+PC9taW5zdGVmcmFkcmFnSUlubnRla3Q+PC9taW5zdGVmcmFkcmFnT2dLb3N0bmFkZXI+PC9hcmJlaWRUcnlnZE9nUGVuc2pvbj48c2thdHRlbWVsZGluZ09wcHJldHRldD48YnJ1a2VyaWRlbnRpZmlrYXRvcj5pa2tlLWltcGxlbWVudGVydDwvYnJ1a2VyaWRlbnRpZmlrYXRvcj48YnJ1a2VyaWRlbnRpZmlrYXRvcnR5cGU+c3lzdGVtaWRlbnRpZmlrYXRvcjwvYnJ1a2VyaWRlbnRpZmlrYXRvcnR5c
GU+PG9wcHJldHRldERhdG8+MjAyMS0xMS0zMFQwNzozNzoxNi4zOTE4MjhaPC9vcHByZXR0ZXREYXRvPjwvc2thdHRlbWVsZGluZ09wcHJldHRldD48L3NrYXR0ZW1lbGRpbmc+</content><type>skattemeldingPersonligUtkast</type></skattemeldingdokument></dokumenter></skattemeldingOgNaeringsspesifikasjonforespoerselResponse>
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Current (gjeldende)
url_gjeldende = f'https://mp-test.sits.no/api/skattemelding/v2/2021/{fnr}'
r_gjeldende = s.get(url_gjeldende)
r_gjeldende
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Assessed (fastsatt)

This returns an _http 404_ if the person has no assessment. Re-run this after you have submitted and received feedback in Altinn that the submission has been processed; you should then have an assessed tax return if it was submitted as Komplett.
url_fastsatt = f'https://mp-test.sits.no/api/skattemelding/v2/fastsatt/2021/{fnr}'
r_fastsatt = s.get(url_fastsatt)
r_fastsatt
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Response from fetching the current return

Current document reference: the response to all of the API calls, whether draft, assessed, or current, includes a document reference. To call the validation service, you must use the correct reference to the current tax return. The cell below extracts the current document reference and prints the response from the fetch-current call.
sjekk_svar = r_gjeldende

sme_og_naering_respons = xmltodict.parse(sjekk_svar.text)
skattemelding_base64 = sme_og_naering_respons["skattemeldingOgNaeringsspesifikasjonforespoerselResponse"]["dokumenter"]["skattemeldingdokument"]
sme_base64 = skattemelding_base64["content"]
dokref = sme_og_naering_respons["skattemeldingOgNaeringsspesifikasjonforespoerselResponse"]["dokumenter"]['skattemeldingdokument']['id']
decoded_sme_xml = decode_dokument(skattemelding_base64)
sme_utkast = xml.dom.minidom.parseString(decoded_sme_xml["content"]).toprettyxml()

print(f"The response from fetch current looks like this; the current document reference is {dokref}\n")
print(xml.dom.minidom.parseString(sjekk_svar.text).toprettyxml())

with open("../../../src/resources/eksempler/v2/Naeringspesifikasjon-enk-v2_etterBeregning.xml", 'r') as f:
    naering_enk_xml = f.read()

innsendingstype = "ikkeKomplett"
naeringsspesifikasjoner_enk_b64 = base64.b64encode(naering_enk_xml.encode("utf-8"))
naeringsspesifikasjoner_enk_b64 = str(naeringsspesifikasjoner_enk_b64.decode("utf-8"))

skattemeldingPersonligSkattepliktig_base64 = sme_base64  # use the draft without any changes
naeringsspesifikasjoner_base64 = naeringsspesifikasjoner_enk_b64
dok_ref = dokref

valider_konvlutt_v2 = """
<?xml version="1.0" encoding="utf-8" ?>
<skattemeldingOgNaeringsspesifikasjonRequest xmlns="no:skatteetaten:fastsetting:formueinntekt:skattemeldingognaeringsspesifikasjon:request:v2">
    <dokumenter>
        <dokument>
            <type>skattemeldingPersonlig</type>
            <encoding>utf-8</encoding>
            <content>{sme_base64}</content>
        </dokument>
        <dokument>
            <type>naeringsspesifikasjon</type>
            <encoding>utf-8</encoding>
            <content>{naeringsspeifikasjon_base64}</content>
        </dokument>
    </dokumenter>
    <dokumentreferanseTilGjeldendeDokument>
        <dokumenttype>skattemeldingPersonlig</dokumenttype>
        <dokumentidentifikator>{dok_ref}</dokumentidentifikator>
    </dokumentreferanseTilGjeldendeDokument>
    <inntektsaar>2021</inntektsaar>
    <innsendingsinformasjon>
        <innsendingstype>{innsendingstype}</innsendingstype>
        <opprettetAv>TurboSkatt</opprettetAv>
    </innsendingsinformasjon>
</skattemeldingOgNaeringsspesifikasjonRequest>
""".replace("\n", "")

naering_enk = valider_konvlutt_v2.format(
    sme_base64=skattemeldingPersonligSkattepliktig_base64,
    naeringsspeifikasjon_base64=naeringsspesifikasjoner_base64,
    dok_ref=dok_ref,
    innsendingstype=innsendingstype)
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Validate the draft tax return with business information
def valider_sme(payload):
    url_valider = f'https://mp-test.sits.no/api/skattemelding/v2/valider/2021/{fnr}'
    header = dict(idporten_header)
    header["Content-Type"] = "application/xml"
    return s.post(url_valider, headers=header, data=payload)


valider_respons = valider_sme(naering_enk)
resultatAvValidering = xmltodict.parse(valider_respons.text)["skattemeldingOgNaeringsspesifikasjonResponse"]["resultatAvValidering"]

if valider_respons:
    print(resultatAvValidering)
    print()
    print(xml.dom.minidom.parseString(valider_respons.text).toprettyxml())
else:
    print(valider_respons.status_code, valider_respons.headers, valider_respons.text)
validertMedFeil <?xml version="1.0" ?> <skattemeldingOgNaeringsspesifikasjonResponse xmlns="no:skatteetaten:fastsetting:formueinntekt:skattemeldingognaeringsspesifikasjon:response:v2"> <avvikVedValidering> <avvik> <avvikstype>xmlValideringsfeilPaaNaeringsopplysningene</avvikstype> </avvik> </avvikVedValidering> <resultatAvValidering>validertMedFeil</resultatAvValidering> <aarsakTilValidertMedFeil>xmlValideringsfeilPaaNaeringsopplysningene</aarsakTilValidertMedFeil> </skattemeldingOgNaeringsspesifikasjonResponse>
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Altinn 3

1. Fetch an Altinn token
2. Create a new instance of the form
3. Upload attachments for the tax return
4. Update the tax return XML with a reference to the vedlegg_id from Altinn3
5. Upload the tax return and business information
#1
altinn3_applikasjon = "skd/formueinntekt-skattemelding-v2"
altinn_header = hent_altinn_token(idporten_header)

#2
instans_data = opprett_ny_instans_med_inntektsaar(altinn_header, fnr, "2021", appnavn=altinn3_applikasjon)
{'Authorization': 'Bearer eyJhbGciOiJSUzI1NiIsImtpZCI6IjI3RTAyRTk4M0FCMUEwQzZEQzFBRjAyN0YyMUZFMUVFNENEQjRGRjEiLCJ4NXQiOiJKLUF1bURxeG9NYmNHdkFuOGhfaDdremJUX0UiLCJ0eXAiOiJKV1QifQ.eyJuYW1laWQiOiI4NTMzNyIsInVybjphbHRpbm46dXNlcmlkIjoiODUzMzciLCJ1cm46YWx0aW5uOnVzZXJuYW1lIjoibXVuaGplbSIsInVybjphbHRpbm46cGFydHlpZCI6NTAxMTA0OTUsInVybjphbHRpbm46YXV0aGVudGljYXRlbWV0aG9kIjoiTm90RGVmaW5lZCIsInVybjphbHRpbm46YXV0aGxldmVsIjozLCJjbGllbnRfYW1yIjoibm9uZSIsInBpZCI6IjI5MTE0NTAxMzE4IiwidG9rZW5fdHlwZSI6IkJlYXJlciIsImNsaWVudF9pZCI6IjhkN2FkYWQ3LWI0OTctNDBkMC04ODk3LTlhOWQ4NmM5NTMwNiIsImFjciI6IkxldmVsMyIsInNjb3BlIjoib3BlbmlkIHNrYXR0ZWV0YXRlbjpmb3JtdWVpbm50ZWt0L3NrYXR0ZW1lbGRpbmciLCJleHAiOjE2MzgzNDQ0NTUsImlhdCI6MTYzODI2NTExMywiY2xpZW50X29yZ25vIjoiOTc0NzYxMDc2IiwiY29uc3VtZXIiOnsiYXV0aG9yaXR5IjoiaXNvNjUyMy1hY3RvcmlkLXVwaXMiLCJJRCI6IjAxOTI6OTc0NzYxMDc2In0sImlzcyI6Imh0dHBzOi8vcGxhdGZvcm0udHQwMi5hbHRpbm4ubm8vYXV0aGVudGljYXRpb24vYXBpL3YxL29wZW5pZC8iLCJuYmYiOjE2MzgyNjUxMTN9.BYvu4hWxhFDTQSXrsxXA5EKBRUpt1v71AP22YkVCOhfoxhMqbes0x9QpKw6PQ6Xm8PtokJpWB-HeuPkG8nHPgQGMY4HV1_zlfxKjXQjYqYlPVT8tCwVJUaNUOcRHaA7zrEytMPUcohuIfRrBPMAyXF3fnETSm26YhLlHNqAWz5N5g6_GIiixDVzydp8WY3IWSb5U0u3zPEUgoSqqJr3DA9pUzhJrevusU386P9D57_Zm2ZRS3QZ4hvRSAmDjkfntTt0prnXmHFG1Qqv0BVdgmNRAzlgHVyH0KJVCrsFUU8_CxyKK6j4lvuDc4ELvvscypWdvTc1I_KFuXoGhQbY7cQ'}
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Upload the tax return

First upload the attachments that belong to the tax return. The example below only covers general attachments for the tax return; in the XML, the attachment entry carries a unique id issued by Altinn when you upload the attachment file, the filename (vedlegg_eksempel_sirius_stjerne.jpg), the filetype (jpg), and the attachment type (dokumentertMarkedsverdi). The same principle applies to other cards that can have attachments. Remember that the order of the XML elements matters for getting the XML to validate.
vedleggfil = "vedlegg_eksempel_sirius_stjerne.jpg"
opplasting_respons = last_opp_vedlegg(instans_data,
                                      altinn_header,
                                      vedleggfil,
                                      content_type="image/jpeg",
                                      data_type="skattemelding-vedlegg",
                                      appnavn=altinn3_applikasjon)

vedlegg_id = opplasting_respons.json()["id"]

# Now we modify the tax return so that the attachment id is included in the tax return XML
with open("../../../src/resources/eksempler/v2/personligSkattemeldingV9EksempelVedlegg.xml") as f:
    filnavn = Path(vedleggfil).name
    filtype = "jpg"
    partsnummer = xmltodict.parse(decoded_sme_xml["content"])["skattemelding"]["partsreferanse"]
    sme_xml = f.read().format(partsnummer=partsnummer,
                              vedlegg_id=vedlegg_id,
                              filnavn=filnavn,
                              filtype=filtype)

sme_xml_b64 = base64.b64encode(sme_xml.encode("utf-8"))
sme_xml_b64 = str(sme_xml_b64.decode("utf-8"))

# Let's check that the tax return still passes the validation service
naering_enk_med_vedlegg = valider_konvlutt_v2.format(
    sme_base64=sme_xml_b64,
    naeringsspeifikasjon_base64=naeringsspesifikasjoner_base64,
    dok_ref=dok_ref,
    innsendingstype=innsendingstype)

valider_respons = valider_sme(naering_enk_med_vedlegg)
resultat_av_validering_med_vedlegg = xmltodict.parse(valider_respons.text)["skattemeldingOgNaeringsspesifikasjonResponse"]["resultatAvValidering"]
resultat_av_validering_med_vedlegg

# Upload the tax return
req_send_inn = last_opp_skattedata(instans_data,
                                   altinn_header,
                                   xml=naering_enk_med_vedlegg,
                                   data_type="skattemeldingOgNaeringsspesifikasjon",
                                   appnavn=altinn3_applikasjon)
req_send_inn
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Set the status to ready for retrieval by Skatteetaten.
# "next" is called twice to step the instance process forward twice
req_bekreftelse = endre_prosess_status(instans_data, altinn_header, "next", appnavn=altinn3_applikasjon)
req_bekreftelse = endre_prosess_status(instans_data, altinn_header, "next", appnavn=altinn3_applikasjon)
req_bekreftelse
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Check the status of the Altinn3 instance to see whether Skatteetaten has retrieved it.

This status will initially have the value "none". It is updated once Skatteetaten has processed the submission.

- For a **komplett** submission, the status is updated to Godkjent/Avvist (approved/rejected) once the submission has been processed.
- For an **ikkeKomplett** submission, the status is updated to Tilgjengelig (available) once the submission has been processed. After submission via SME, it is updated to Godkjent/Avvist after processing.
instans_etter_bekreftelse = hent_instans(instans_data, altinn_header, appnavn=altinn3_applikasjon)
response_data = instans_etter_bekreftelse.json()
print(f"Instance status: {response_data['status']['substatus']}")
Instance status: None
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
View the submission in Altinn

Take a sip of coffee and pat yourself on the back: you have now submitted. Let the bureaucracy do its thing... and that takes a little while. Currently Skatteetaten checks with Altinn3 every 30 seconds for new submissions. If more than a couple of minutes pass, something has most likely failed. Before you report an error to Skatteetaten, you must at a minimum include either a correlation id or an instance id so that we can troubleshoot.

Non-complete (ikkeKomplett) tax return

1. When you have received a reply in your Altinn inbox, you can go to https://skatt-sbstest.sits.no/web/skattemeldingen/2021
2. There you will see business income transferred from the tax return
3. When you have submitted in SME, you will be able to see in the Altinn instance that it has been closed
4. Run the cell below to see that you have received a newly assessed tax return and business information
print("Resultat av hent fastsatt før fastsetting") print(r_fastsatt.text) print("Resultat av hent fastsatt etter fastsetting") r_fastsatt2 = s.get(url_fastsatt) r_fastsatt2.text #r_fastsatt.elapsed.total_seconds()
_____no_output_____
Apache-2.0
docs/test/testinnsending/.ipynb_checkpoints/person-enk-med-vedlegg-2021-checkpoint.ipynb
Skatteetaten/skattemelding
Full Run

In order to run the scripts, we need to be in the base directory. This will move us out of the notebooks directory and into the base directory.
import os
os.chdir('..')
_____no_output_____
MIT
notebooks/Full Run.ipynb
joelmpiper/ga_project
Define where each of the datasets is stored.
Xtrain_dir = 'solar/data/kaggle_solar/train/'
Xtest_dir = 'solar/data/kaggle_solar/test'
ytrain_file = 'solar/data/kaggle_solar/train.csv'
station_file = 'solar/data/kaggle_solar/station_info.csv'

import numpy as np
_____no_output_____
MIT
notebooks/Full Run.ipynb
joelmpiper/ga_project
Define the parameters needed to run the analysis script.
# Choose up to 98 stations; not specifying a station means to use all that fall within
# the given lats and longs. If the parameter 'all' is given, then it will use all
# stations no matter the provided lats and longs.
station = ['all']

# Determine which dates will be used to train the model. No specified date means use
# the entire set from 1994-01-01 until 2007-12-31.
train_dates = ['1994-01-01', '2007-12-31']

# 2008-01-01 until 2012-11-30
test_dates = ['2008-01-01', '2012-11-30']

station_layout = True

# Use all variables
var = ['all']

# Keep model 0 (the default model) as a column for each of the variables
# (aggregated over other dimensions)
model = [0]

# Aggregate over all times
times = ['all']

default_grid = {'type': 'relative',
                'axes': {'var': var, 'models': model, 'times': times, 'station': station}}

# This just uses the station_names as another feature
stat_names = {'type': 'station_names'}
frac_dist = {'type': 'frac_dist'}
days_solstice = {'type': 'days_from_solstice'}
days_cold = {'type': 'days_from_coldest'}

all_feats = [stat_names, default_grid, frac_dist, days_solstice, days_cold]
#all_feats = [stat_names, days_solstice, days_cold]
_____no_output_____
MIT
notebooks/Full Run.ipynb
joelmpiper/ga_project
Define the directories that contain the code needed to run the analysis
import solar.report.submission
import solar.wrangle.wrangle
import solar.wrangle.subset
import solar.wrangle.engineer
import solar.analyze.model
_____no_output_____
MIT
notebooks/Full Run.ipynb
joelmpiper/ga_project
Reload the modules to load in any code changes since the last run. Load in all of the data needed for the run and store it in a pickle file. The 'external' flag determines whether to save the pickle file on a connected hard drive or to store it locally. The information in pink shows what has been written to the log file.
# test combination of station names and grid
from importlib import reload  # 'reload' is a builtin on Python 2; on Python 3 it must be imported

reload(solar.wrangle.wrangle)
reload(solar.wrangle.subset)
reload(solar.wrangle.engineer)
from solar.wrangle.wrangle import SolarData

#external = True
input_data = SolarData.load(Xtrain_dir, ytrain_file, Xtest_dir, station_file,
                            train_dates, test_dates, station,
                            station_layout, all_feats, 'extern')

reload(solar.analyze.model)
import numpy as np
from solar.analyze.model import Model
from sklearn.linear_model import Ridge
from sklearn import metrics

error_formula = 'mean_absolute_error'
model = Model.model(input_data, Ridge, {'alpha': np.logspace(-3, 1, 10, base=10)},
                    10, error_formula, 4, 'extern', normalize=True)

reload(solar.analyze.model)
from solar.analyze.model import Model
from sklearn.ensemble import GradientBoostingRegressor

model = Model.model_from_pickle('input_2016-02-06-18-17-28.p',
                                GradientBoostingRegressor,
                                {'n_estimators': range(100, 500, 100),
                                 'learning_rate': np.logspace(-3, 1, 5, base=10)},
                                10, error_formula,
                                loss='ls', max_depth=1, random_state=0)

reload(solar.report.submission)
from solar.report.submission import Submission

preds = Submission.submit_from_pickle('model_2016-02-06-18-21-41.p',
                                      'input_2016-02-06-18-17-28.p', True)
_____no_output_____
MIT
notebooks/Full Run.ipynb
joelmpiper/ga_project
"Jupyter notebook"> "Setup and snippets for a smooth jupyter notebook experience"- toc: False- branch: master- categories: [code snippets, jupyter, python] Start jupyter notebook on boot Edit the crontab for your user.
crontab -e
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
Add the following line.
@reboot source ~/.venv/venv/bin/activate; ~/.venv/venv/bin/jupyter-notebook
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
---

Magic Commands

Autoreload imports when file changes are made.
%load_ext autoreload
%autoreload 2
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
Show matplotlib plots inside the notebook.
import matplotlib.pyplot as plt
%matplotlib inline
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
Measure execution time of a cell.
%%time
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
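`%%time` reports a single run. For an averaged timing over many repeated runs, IPython also provides `%%timeit`; a small sketch with throwaway example code:

```python
%%timeit
sum(range(1000))
```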
`pip install` from jupyter notebook.
import sys
!{sys.executable} -m pip install numpy
_____no_output_____
Apache-2.0
_notebooks/2020-03-02-jupyter-notebook.ipynb
fabge/snippets
Data Science Academy - Python Fundamentals - Chapter 10

Download: http://github.com/dsacademybr
# Python language version
from platform import python_version
print('Python version used in this Jupyter Notebook:', python_version())
Python version used in this Jupyter Notebook: 3.7.6
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Lab 4 - Building a Linear Regression Model with TensorFlow

Use the Deep Learning Book as a reference: http://www.deeplearningbook.com.br/

Note: Although version 2.x of TensorFlow is already available, this Jupyter Notebook uses version 1.15, which is also maintained by the Google team. If you want to learn TensorFlow 2.0, that version is already available in the Formação IA courses here at DSA.

Run the cell below to install TensorFlow on your machine.
# TensorFlow version to be used
!pip install -q tensorflow==1.15.2

# Imports
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
_____no_output_____
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Defining the model hyperparameters
# Model hyperparameters
learning_rate = 0.01
training_epochs = 2000
display_step = 200
_____no_output_____
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Defining the training and test datasets

Consider X as the size of a house and y as the price of a house.
# Training dataset
train_X = np.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167, 7.042, 10.791, 5.313, 7.997, 5.654, 9.27, 3.1])
train_y = np.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221, 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])
n_samples = train_X.shape[0]

# Test dataset
test_X = np.asarray([6.83, 4.668, 8.9, 7.91, 5.7, 8.7, 3.1, 2.1])
test_y = np.asarray([1.84, 2.273, 3.2, 2.831, 2.92, 3.24, 1.35, 1.03])
_____no_output_____
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Placeholders and variables
# Placeholders for the predictor variable (x) and the target variable (y)
X = tf.placeholder(tf.float32)
y = tf.placeholder(tf.float32)

# Model weight and bias
W = tf.Variable(np.random.randn(), name="weight")
b = tf.Variable(np.random.randn(), name="bias")
_____no_output_____
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Building the model
# Building the linear model
# Linear model formula: y = W*X + b
linear_model = W*X + b

# Mean squared error
cost = tf.reduce_sum(tf.square(linear_model - y)) / (2*n_samples)

# Optimization with gradient descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)
_____no_output_____
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
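For reference, the cost in the cell above and the gradient-descent updates it implies can be written out explicitly (a sketch, with η the learning rate and n the number of samples):

```latex
J(W,b) = \frac{1}{2n}\sum_{i=1}^{n}\left(W x_i + b - y_i\right)^2
```

```latex
W \leftarrow W - \frac{\eta}{n}\sum_{i=1}^{n}\left(W x_i + b - y_i\right)x_i,
\qquad
b \leftarrow b - \frac{\eta}{n}\sum_{i=1}^{n}\left(W x_i + b - y_i\right)
```

Each `sess.run(optimizer, ...)` call in the next cell performs exactly one such update over the full training batch.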
Running the computational graph, training and testing the model
# Defining variable initialization
init = tf.global_variables_initializer()

# Starting the session
with tf.Session() as sess:
    # Initializing the variables
    sess.run(init)

    # Training the model
    for epoch in range(training_epochs):
        # Optimization with gradient descent
        sess.run(optimizer, feed_dict={X: train_X, y: train_y})

        # Display for each epoch
        if (epoch+1) % display_step == 0:
            c = sess.run(cost, feed_dict={X: train_X, y: train_y})
            print("Epoch:{0:6} \t Cost (Error):{1:10.4} \t W:{2:6.4} \t b:{3:6.4}".format(epoch+1, c, sess.run(W), sess.run(b)))

    # Printing the final model parameters
    print("\nOptimization complete!")
    training_cost = sess.run(cost, feed_dict={X: train_X, y: train_y})
    print("Final training cost:", training_cost, " - Final W:", sess.run(W), " - Final b:", sess.run(b), '\n')

    # Visualizing the result
    plt.plot(train_X, train_y, 'ro', label='Original data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Regression line')
    plt.legend()
    plt.show()

    # Testing the model
    testing_cost = sess.run(tf.reduce_sum(tf.square(linear_model - y)) / (2 * test_X.shape[0]),
                            feed_dict={X: test_X, y: test_y})

    print("Final test cost:", testing_cost)
    print("Absolute mean squared difference:", abs(training_cost - testing_cost))

    # Display for the test set
    plt.plot(test_X, test_y, 'bo', label='Test data')
    plt.plot(train_X, sess.run(W) * train_X + sess.run(b), label='Regression line')
    plt.legend()
    plt.show()

sess.close()
Epoch:   200 	 Cost (Error):    0.2628 	 W:0.4961 	 b:-0.934
Epoch:   400 	 Cost (Error):    0.1913 	 W:0.4433 	 b:-0.5603
Epoch:   600 	 Cost (Error):    0.1473 	 W: 0.402 	 b:-0.2672
Epoch:   800 	 Cost (Error):    0.1202 	 W:0.3696 	 b:-0.03732
Epoch:  1000 	 Cost (Error):    0.1036 	 W:0.3441 	 b: 0.143
Epoch:  1200 	 Cost (Error):   0.09331 	 W:0.3242 	 b:0.2844
Epoch:  1400 	 Cost (Error):     0.087 	 W:0.3085 	 b:0.3954
Epoch:  1600 	 Cost (Error):   0.08313 	 W:0.2963 	 b:0.4824
Epoch:  1800 	 Cost (Error):   0.08074 	 W:0.2866 	 b:0.5506
Epoch:  2000 	 Cost (Error):   0.07927 	 W:0.2791 	 b:0.6041

Optimization complete!
Final training cost: 0.07927451  - Final W: 0.2790933  - Final b: 0.60413384
MIT
pyfund/Cap10/Notebooks/DSA-Python-Cap10-Lab04.ipynb
guimaraesalves/Data-Science-Academy
Bay Area Bike Share Analysis

Introduction

> **Tip**: Quoted sections like this will provide helpful instructions on how to navigate and use an iPython notebook.

[Bay Area Bike Share](http://www.bayareabikeshare.com/) is a company that provides on-demand bike rentals for customers in San Francisco, Redwood City, Palo Alto, Mountain View, and San Jose. Users can unlock bikes from a variety of stations throughout each city, and return them to any station within the same city. Users pay for the service either through a yearly subscription or by purchasing 3-day or 24-hour passes. Users can make an unlimited number of trips, with trips under thirty minutes in length having no additional charge; longer trips will incur overtime fees.

In this project, you will put yourself in the shoes of a data analyst performing an exploratory analysis on the data. You will take a look at two of the major parts of the data analysis process: data wrangling and exploratory data analysis. But before you even start looking at data, think about some questions you might want to understand about the bike share data. Consider, for example, if you were working for Bay Area Bike Share: what kinds of information would you want to know about in order to make smarter business decisions? Or you might think about if you were a user of the bike share service. What factors might influence how you would want to use the service?

**Question 1**: Write at least two questions you think could be answered by data.

**Answer**: Which trajectory has the most users, and at what time of day that happens.

Using Visualizations to Communicate Findings in Data

As a data analyst, the ability to effectively communicate findings is a key part of the job. After all, your best analysis is only as good as your ability to communicate it.

In 2014, Bay Area Bike Share held an [Open Data Challenge](http://www.bayareabikeshare.com/datachallenge-2014) to encourage data analysts to create visualizations based on their open data set. You’ll create your own visualizations in this project, but first, take a look at the [submission winner for Best Analysis](http://thfield.github.io/babs/index.html) from Tyler Field. Read through the entire report to answer the following question:

**Question 2**: What visualizations do you think provide the most interesting insights? Are you able to answer either of the questions you identified above based on Tyler’s analysis? Why or why not?

**Answer**: I was able to answer one question because there is a specific graph for it. The most interesting graph was definitely the interactive one, which crossed information with games, rainy days, and temperature. Another very useful graph is the one with the heatmap diagram of systemwide rides. The heatmap provides the answer to "which trajectory has the most users": Harry Bridges Plaza to Embarcadero at Sansome is the winner with 1330 rides. Unfortunately, I could not find at what time of day this trajectory has the most users.

Data Wrangling

Now it's time to explore the data for yourself. Year 1 and Year 2 data from the Bay Area Bike Share's [Open Data](http://www.bayareabikeshare.com/open-data) page have already been provided with the project materials; you don't need to download anything extra. The data comes in three parts: the first half of Year 1 (files starting `201402`), the second half of Year 1 (files starting `201408`), and all of Year 2 (files starting `201508`).
There are three main datafiles associated with each part: trip data showing information about each trip taken in the system (`*_trip_data.csv`), information about the stations in the system (`*_station_data.csv`), and daily weather data for each city in the system (`*_weather_data.csv`).When dealing with a lot of data, it can be useful to start by working with only a sample of the data. This way, it will be much easier to check that our data wrangling steps are working since our code will take less time to complete. Once we are satisfied with the way things are working, we can then set things up to work on the dataset as a whole.Since the bulk of the data is contained in the trip information, we should target looking at a subset of the trip data to help us get our bearings. You'll start by looking at only the first month of the bike trip data, from 2013-08-29 to 2013-09-30. The code below will take the data from the first half of the first year, then write the first month's worth of data to an output file. This code exploits the fact that the data is sorted by date (though it should be noted that the first two days are sorted by trip time, rather than being completely chronological).First, load all of the packages and functions that you'll be using in your analysis by running the first code cell below. Then, run the second code cell to read a subset of the first trip data file, and write a new file containing just the subset we are initially interested in.> **Tip**: You can run a code cell like you formatted Markdown cells by clicking on the cell and using the keyboard shortcut **Shift** + **Enter** or **Shift** + **Return**. Alternatively, a code cell can be executed using the **Play** button in the toolbar after selecting it. While the cell is running, you will see an asterisk in the message to the left of the cell, i.e. `In [*]:`. The asterisk will change into a number to show that execution has completed, e.g. `In [1]`. If there is output, it will show up as `Out [1]:`, with an appropriate number to match the "In" number.
# import all necessary packages and functions. import csv from datetime import datetime import numpy as np import pandas as pd from babs_datacheck import question_3 from babs_visualizations import usage_stats, usage_plot from IPython.display import display %matplotlib inline # file locations file_in = '201402_trip_data.csv' file_out = '201309_trip_data.csv' with open(file_out, 'w') as f_out, open(file_in, 'r') as f_in: # set up csv reader and writer objects in_reader = csv.reader(f_in) out_writer = csv.writer(f_out) # write rows from in-file to out-file until specified date reached while True: datarow = next(in_reader) # trip start dates in 3rd column, m/d/yyyy HH:MM formats if datarow[2][:9] == '10/1/2013': break out_writer.writerow(datarow)
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Condensing the Trip DataThe first step is to look at the structure of the dataset to see if there's any data wrangling we should perform. The below cell will read in the sampled data file that you created in the previous cell, and print out the first few rows of the table.
sample_data = pd.read_csv('201309_trip_data.csv') display(sample_data.head())
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
In this exploration, we're going to concentrate on factors in the trip data that affect the number of trips that are taken. Let's focus down on a few selected columns: the trip duration, start time, start terminal, end terminal, and subscription type. Start time will be divided into year, month, and hour components. We will also add a column for the day of the week and abstract the start and end terminal to be the start and end _city_.Let's tackle the lattermost part of the wrangling process first. Run the below code cell to see how the station information is structured, then observe how the code will create the station-city mapping. Note that the station mapping is set up as a function, `create_station_mapping()`. Since it is possible that more stations are added or dropped over time, this function will allow us to combine the station information across all three parts of our data when we are ready to explore everything.
# Display the first few rows of the station data file. station_info = pd.read_csv('201402_station_data.csv') display(station_info.head()) # This function will be called by another function later on to create the mapping. def create_station_mapping(station_data): """ Create a mapping from station IDs to cities, returning the result as a dictionary. """ station_map = {} for data_file in station_data: with open(data_file, 'r') as f_in: # set up csv reader object - note that we are using DictReader, which # takes the first row of the file as a header row for each row's # dictionary keys weather_reader = csv.DictReader(f_in) for row in weather_reader: station_map[row['station_id']] = row['landmark'] return station_map
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
You can now use the mapping to condense the trip data to the selected columns noted above. This will be performed in the `summarise_data()` function below. As part of this function, the `datetime` module is used to **p**arse the timestamp strings from the original data file as datetime objects (`strptime`), which can then be output in a different string **f**ormat (`strftime`). The parsed objects also have a variety of attributes and methods to quickly obtain the date and time components we need.There are two tasks that you will need to complete to finish the `summarise_data()` function. First, you should perform an operation to convert the trip durations from being in terms of seconds to being in terms of minutes. (There are 60 seconds in a minute.) Secondly, you will need to create the columns for the year, month, hour, and day of the week. Take a look at the [documentation for datetime objects in the datetime module](https://docs.python.org/2/library/datetime.html#datetime-objects). **Find the appropriate attributes and method to complete the below code.**
def summarise_data(trip_in, station_data, trip_out): """ This function takes trip and station information and outputs a new data file with a condensed summary of major trip information. The trip_in and station_data arguments will be lists of data files for the trip and station information, respectively, while trip_out specifies the location to which the summarized data will be written. """ # generate dictionary of station - city mapping station_map = create_station_mapping(station_data) with open(trip_out, 'w') as f_out: # set up csv writer object out_colnames = ['duration', 'start_date', 'start_year', 'start_month', 'start_hour', 'weekday', 'start_city', 'end_city', 'subscription_type'] trip_writer = csv.DictWriter(f_out, fieldnames = out_colnames) trip_writer.writeheader() for data_file in trip_in: with open(data_file, 'r') as f_in: # set up csv reader object trip_reader = csv.DictReader(f_in) # collect data from and process each row for row in trip_reader: new_point = {} # convert duration units from seconds to minutes ### Question 3a: Add a mathematical operation below ### ### to convert durations from seconds to minutes. ### new_point['duration'] = float(row['Duration'])/60 # reformat datestrings into multiple columns ### Question 3b: Fill in the blanks below to generate ### ### the expected time values. ### trip_date = datetime.strptime(row['Start Date'], '%m/%d/%Y %H:%M') new_point['start_date'] = trip_date.strftime('%Y-%m-%d') new_point['start_year'] = trip_date.strftime('%Y') new_point['start_month'] = trip_date.strftime('%m') new_point['start_hour'] = trip_date.strftime('%H') new_point['weekday'] = trip_date.strftime('%A') # remap start and end terminal with start and end city new_point['start_city'] = station_map[row['Start Terminal']] new_point['end_city'] = station_map[row['End Terminal']] # two different column names for subscribers depending on file if 'Subscription Type' in row: new_point['subscription_type'] = row['Subscription Type'] else: new_point['subscription_type'] = row['Subscriber Type'] # write the processed information to the output file. trip_writer.writerow(new_point)
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
**Question 3**: Run the below code block to call the `summarise_data()` function you finished in the above cell. It will take the data contained in the files listed in the `trip_in` and `station_data` variables, and write a new file at the location specified in the `trip_out` variable. If you've performed the data wrangling correctly, the below code block will print out the first few lines of the dataframe and a message verifying that the data point counts are correct.
# Process the data by running the function we wrote above. station_data = ['201402_station_data.csv'] trip_in = ['201309_trip_data.csv'] trip_out = '201309_trip_summary.csv' summarise_data(trip_in, station_data, trip_out) # Load in the data file and print out the first few rows sample_data = pd.read_csv(trip_out) display(sample_data.head()) # Verify the dataframe by counting data points matching each of the time features. question_3(sample_data)
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
> **Tip**: If you save a Jupyter notebook, the output from running code blocks will also be saved. However, the state of your workspace will be reset once a new session is started. Make sure that you run all of the necessary code blocks from your previous session to reestablish variables and functions before picking up where you last left off. Exploratory Data AnalysisNow that you have some data saved to a file, let's look at some initial trends in the data. Some code has already been written for you in the `babs_visualizations.py` script to help summarize and visualize the data; this has been imported as the functions `usage_stats()` and `usage_plot()`. In this section we'll walk through some of the things you can do with the functions, and you'll use the functions for yourself in the last part of the project. First, run the following cell to load the data, then use the `usage_stats()` function to see the total number of trips made in the first month of operations, along with some statistics regarding how long trips took.
trip_data = pd.read_csv('201309_trip_summary.csv') usage_stats(trip_data)
There are 27345 data points in the dataset. The average duration of trips is 27.60 minutes. The median trip duration is 10.72 minutes. 25% of trips are shorter than 6.82 minutes. 25% of trips are longer than 17.28 minutes.
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
You should see that there are over 27,000 trips in the first month, and that the average trip duration is larger than the median trip duration (the point where 50% of trips are shorter, and 50% are longer). In fact, the mean is larger than the 75th-percentile duration, i.e. larger than 75% of all trip durations. This will be interesting to look at later on.Let's start looking at how those trips are divided by subscription type. One easy way to build an intuition about the data is to plot it. We'll use the `usage_plot()` function for this. The second argument of the function allows us to count up the trips across a selected variable, displaying the information in a plot. The expression below will show how many customer and how many subscriber trips were made. Try it out!
usage_plot(trip_data, 'subscription_type')
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
It looks like there are about 50% more trips made by subscribers in the first month than by customers. Let's try a different variable now. What does the distribution of trip durations look like?
usage_plot(trip_data, 'duration')
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Looks pretty strange, doesn't it? Take a look at the duration values on the x-axis. Most rides are expected to be 30 minutes or less, since there are overage charges for taking extra time in a single trip. The first bar spans durations up to about 1000 minutes, or over 16 hours. Based on the statistics we got out of `usage_stats()`, we should have expected some trips with very long durations that bring the average to be so much higher than the median: the plot shows this in a dramatic, but unhelpful way.When exploring the data, you will often need to work with visualization function parameters in order to make the data easier to understand. Here's where the third argument of the `usage_plot()` function comes in. Filters can be set for data points as a list of conditions. Let's start by limiting things to trips of less than 60 minutes.
usage_plot(trip_data, 'duration', ['duration < 60'])
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
This is looking better! You can see that most trips are indeed less than 30 minutes in length, but there's more that you can do to improve the presentation. Since the minimum duration is not 0, the left-hand bar is slightly above 0. We want to be able to tell where there is a clear boundary at 30 minutes, so it will look nicer if we have bin sizes and bin boundaries that correspond to some number of minutes. Fortunately, you can use the optional "boundary" and "bin_width" parameters to adjust the plot. By setting "boundary" to 0, one of the bin edges (in this case the left-most bin) will start at 0 rather than the minimum trip duration. And by setting "bin_width" to 5, each bar will count up data points in five-minute intervals.
usage_plot(trip_data, 'duration', ['duration < 60'], boundary = 0, bin_width = 5)
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
**Question 4**: Which five-minute trip duration range shows the greatest number of trips? Approximately how many trips were made in this range?**Answer**: The 5-to-10-minute range, with approximately 9,000 trips. Visual adjustments like this might be small, but they can go a long way in helping you understand the data and convey your findings to others. Performing Your Own AnalysisNow that you've done some exploration on a small sample of the dataset, it's time to go ahead and put together all of the data in a single file and see what trends you can find. The code below will use the same `summarise_data()` function as before to process data. After running the cell below, you'll have processed all the data into a single data file. Note that the function will not display any output while it runs, and this can take a while to complete since you have much more data than the sample you worked with above.
station_data = ['201402_station_data.csv', '201408_station_data.csv', '201508_station_data.csv' ] trip_in = ['201402_trip_data.csv', '201408_trip_data.csv', '201508_trip_data.csv' ] trip_out = 'babs_y1_y2_summary.csv' # This function will take in the station data and trip data and # write out a new data file to the name listed above in trip_out. summarise_data(trip_in, station_data, trip_out)
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Since the `summarise_data()` function has created a standalone file, the above cell will not need to be run a second time, even if you close the notebook and start a new session. You can just load in the dataset and then explore things from there.
trip_data = pd.read_csv('babs_y1_y2_summary.csv') display(trip_data.head())
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Now it's your turn to explore the new dataset with `usage_stats()` and `usage_plot()` and report your findings! Here's a refresher on how to use the `usage_plot()` function:- first argument (required): loaded dataframe from which data will be analyzed.- second argument (required): variable on which trip counts will be divided.- third argument (optional): data filters limiting the data points that will be counted. Filters should be given as a list of conditions; each element should be a string in the following format: `'<field> <operator> <value>'` using one of the following operations: >, <, >=, <=, ==, !=. Data points must satisfy all conditions to be counted or visualized. For example, `["duration < 15", "start_city == 'San Francisco'"]` retains only trips that originated in San Francisco and are less than 15 minutes long.If data is being split on a numeric variable (thus creating a histogram), some additional parameters may be set by keyword.- "n_bins" specifies the number of bars in the resultant plot (default is 10).- "bin_width" specifies the width of each bar (default divides the range of the data by number of bins). "n_bins" and "bin_width" cannot be used simultaneously.- "boundary" specifies where one of the bar edges will be placed; other bar edges will be placed around that value (this may result in an additional bar being plotted). This argument may be used alongside the "n_bins" and "bin_width" arguments.You can also add some customization to the `usage_stats()` function as well. The second argument of the function can be used to set up filter conditions, just like how they are set up in `usage_plot()`. As a quick illustration of how these options combine, see the example call below.
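A sample call combining a filter list with the histogram keywords might look like the following; it uses only the arguments documented above, and the particular cutoff values are arbitrary examples.

usage_plot(trip_data, 'duration', ["duration < 60", "start_city == 'San Francisco'"], boundary = 0, bin_width = 5)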
usage_stats(trip_data)

usage_plot(trip_data,'start_hour',["subscription_type == 'Subscriber'"])
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Explore some different variables using the functions above and take note of some trends you find. Feel free to create additional cells if you want to explore the dataset in other ways or multiple ways.> **Tip**: In order to add additional cells to a notebook, you can use the "Insert Cell Above" and "Insert Cell Below" options from the menu bar above. There is also an icon in the toolbar for adding new cells, with additional icons for moving the cells up and down the document. By default, new cells are of the code type; you can also specify the cell type (e.g. Code or Markdown) of selected cells from the Cell menu or the dropdown in the toolbar.Once you're done with your explorations, copy the two visualizations you found most interesting into the cells below, then answer the following questions with a few sentences describing what you found and why you selected the figures. Make sure that you adjust the number of bins or the bin limits so that they effectively convey data findings. Feel free to supplement this with any additional numbers generated from `usage_stats()` or place multiple visualizations to support your observations.
# Final Plot 1
usage_plot(trip_data,'start_hour',["subscription_type == 'Subscriber'"],bin_width=1)
usage_plot(trip_data,'weekday',["subscription_type == 'Subscriber'"])
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
**Question 5a**: What is interesting about the above visualization? Why did you select it?**Answer**: Both graphs show that most subscribers use the service to commute to work, since the vast majority of trips happened on weekdays, between 7-9 AM and 4-5 PM.
# Final Plot 2
usage_plot(trip_data,'start_month',["subscription_type == 'Customer'"], boundary = 1)
usage_plot(trip_data,'start_month',["subscription_type == 'Customer'"], boundary = 1, n_bins=12)
usage_plot(trip_data,'weekday',["subscription_type == 'Customer'",'start_month > 6'],bin_width=30)
usage_plot(trip_data,'start_city',["subscription_type == 'Customer'"])
_____no_output_____
MIT
Project Bike Sharing/Bay_Area_Bike_Share_Analysis.ipynb
afshimono/data_analyst_nanodegree
Apsidal Motion Age for HD 144548Here, I am attempting to derive an age for the triply eclipsing hierarchical triple HD 144548 (Upper Scorpius member) based on the observed orbital precession (apsidal motion) of the inner binary system's orbit about the tertiary companion (star A). A value for the orbital precession is reported by Alonso et al. ([2015, arXiv: 1510.03773](http://adsabs.harvard.edu/abs/2015arXiv151003773A)) as $\dot{\omega} = 0.0235 \pm 0.002\ {\rm deg\, cycle}^{-1}$, obtained from photo-dynamical modeling of a _Kepler_/K2 lightcurve. The technique of determining an age from apsidal motion observed in young binary systems is detailed by Feiden & Dotter ([2013, ApJ, 765, 86](http://adsabs.harvard.edu/abs/2013ApJ...765...86F)). Their technique relies heavily on the analytical framework for the classical theory of orbital precession due to tidal and rotational distortions of gravitational potentials by Kopal ([1978, ASSL, 68](http://adsabs.harvard.edu/abs/1978ASSL...68.....K)) and the inclusion of general relativistic orbital precession by Giménez ([1985, ApJ, 297, 405](http://adsabs.harvard.edu/abs/1985ApJ...297..405G)). In brief, the technique outlined by Feiden & Dotter (2013) relies on the fact that young stars are contracting quasi-hydrostatically as they approach the main sequence. As they contract, the mean density of the star increases (assuming the star has constant mass), thereby altering the distribution of mass with respect to the mean density. This alters the interior structure parameter, which is related to the deviation from sphericity of the star and its resulting gravitational potential. A non-symmetric potential induces a precession of the point of periastron in a companion star's orbit, provided the orbit is eccentric. Since the internal density structure of a young star is changing as it contracts, the inferred interior structure parameter, and thus the induced perturbation on the star's gravitational potential, also changes. Therefore, the rate at which the precession of a binary companion's point of periastron proceeds changes as a function of time. By measuring the rate of precession, one can then estimate the age of the system by inferring the density distribution required to induce that rate of precession, subject to the constraint that the orbital and stellar fundamental properties must be well determined - hence the reason why Feiden & Dotter (2013) focused exclusively on eclipsing binary systems.While a rate of orbital precession was measured by Alonso et al. (2015) for HD 144548, and the properties of all three stars were determined with reasonable precision, there is a fundamental difficulty: it's a triple system. The method outlined by Feiden & Dotter (2013) was intended for binary systems, with no discussion of the influence of a tertiary companion.Fortunately, the measured orbital precession is for the orbit of the inner binary (Ba/Bb) about the tertiary star (A). Below, I focus on modeling the inner binary as a single object orbiting the tertiary star with a mass equal to the sum of the component masses (thus more massive than component A). The first big hurdle is to figure out how to treat the Ba/Bb component as a single star.
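To make the bookkeeping explicit before diving in: the model rate to be matched against the observation is the classical tidal+rotational term plus a general relativistic term,$$\dot{\omega}_{\rm tot}(t) = \dot{\omega}_{\rm cl}(t) + \dot{\omega}_{\rm GR} = 360\left[c_{2,1}\,k_{2,1}(t) + c_{2,2}\,k_{2,2}(t)\right] + \dot{\omega}_{\rm GR} \quad {\rm deg\ cycle^{-1}},$$where $k_{2,i}$ are the (age-dependent) interior structure constants and $c_{2,i}$ their orbit-dependent weights, both defined below. The age estimate is then the time $t$ at which $\dot{\omega}_{\rm tot}(t)$ equals the observed $0.0235 \pm 0.002\ {\rm deg\, cycle}^{-1}$.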
For an initial attempt, we can assume that the B component is a "single" star with a mass equal to the total mass of the binary system with an interior structure constant equal to the weighted mean of the two individual interior structure constants.To compute the mean interior structure constants, we first need to compute the individual weights $c_{2, i}$. For $e = 0$, we have $f(e) = g(e) = 1$.
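For reference, the weights implemented in the next cell take the standard form (following the Kopal 1978 framework cited above), with $(\Omega_i/\omega_K)^2 = (1+e)/(1-e)^3$ for rotation synchronized at periastron and zero when rotation is neglected:$$c_{2,i} = \left[\left(\frac{\Omega_i}{\omega_K}\right)^{2}\left(1 + \frac{m_{3-i}}{m_i}\right)f(e) + 15\,\frac{m_{3-i}}{m_i}\,g(e)\right]\left(\frac{R_i}{a}\right)^{5}, \qquad f(e) = (1-e^2)^{-2}, \qquad g(e) = \frac{8 + 12e^2 + e^4}{8}\,f(e)^{5/2}.$$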
def c2(masses, radii, e, a, rotation=None):
    """Compute the apsidal motion weights c_{2,i} for a pair of stars."""
    f = (1.0 - e**2)**-2
    g = (8.0 + 12.0*e**2 + e**4)*f**(5.0/2.0) / 8.0
    if rotation == 'synchronized':
        # rotation synchronized with the orbital motion at periastron
        omega_ratio_sq = (1.0 + e)/(1.0 - e)**3
    else:
        # default: neglect the rotational distortion term
        omega_ratio_sq = 0.0
    c2_0 = (omega_ratio_sq*(1.0 + masses[1]/masses[0])*f + 15.0*g*masses[1]/masses[0])*(radii[0]/a)**5
    c2_1 = (omega_ratio_sq*(1.0 + masses[0]/masses[1])*f + 15.0*g*masses[0]/masses[1])*(radii[1]/a)**5
    return c2_0, c2_1

# parameters for the orbit of Ba/Bb (semi-major axis in solar radii)
e = 0.0015
a = 7.249776
masses = [0.984, 0.944]

# c2_B = c2(masses, radii, e, a)
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
What complicates the issue is that the interior structure constants for the B components also vary as a function of age, so we need to compute a mean mass track using the $c_2$ coefficients and the individual $k_2$ values.
import numpy as np trk_Ba = np.genfromtxt('/Users/grefe950/evolve/dmestar/trk/gs98/p000/a0/amlt1884/m0980_GS98_p000_p0_y28_mlt1.884.trk') trk_Bb = np.genfromtxt('/Users/grefe950/evolve/dmestar/trk/gs98/p000/a0/amlt1884/m0940_GS98_p000_p0_y28_mlt1.884.trk')
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
Create tracks with time steps equally spaced in $\log_{10}({\rm age})$.
from scipy.interpolate import interp1d log10_age = np.arange(6.0, 8.0, 0.01) # log10(age/yr) ages = 10**log10_age icurve = interp1d(trk_Ba[:,0], trk_Ba, kind='linear', axis=0) new_trk_Ba = icurve(ages) icurve = interp1d(trk_Bb[:,0], trk_Bb, kind='linear', axis=0) new_trk_Bb = icurve(ages)
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
Now, compute the $c_2$ coefficients for each age.
mean_trk_B = np.empty((len(ages), 3)) for i, age in enumerate(ages): c2s = c2(masses, [10**new_trk_Ba[i, 4], 10**new_trk_Bb[i, 4]], e, a, rotation='synchronized') avg_k2 = (c2s[0]*new_trk_Ba[i, 10] + c2s[1]*new_trk_Bb[i, 10])/(sum(c2s)) mean_trk_B[i] = np.array([age, 10**new_trk_Ba[i, 4] + 10**new_trk_Bb[i, 4], avg_k2])
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
With that, we have an estimate for the mean B component properties as a function of age. One complicating factor is the "radius" of the average B component. If we are modeling the potential created by the Ba/Bb components as that of a single star, we need to assume that the A component never enters into any region of the combined potential that is dominated by either component.Unfortunately, it is very likely that the ratio of the Ba/Bb binary "radius" to the semi-major axis of the A/B orbit is going to be a dominant contributor to the apsidal motion. Attempt 1: Semi-major axis + radius of B componentLet's define orbital properties of the (A, B) system.
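Concretely, the computation below adopts a rough effective radius for the combined B component, namely the inner semi-major axis plus the mean of the two stellar radii,$$R_B^{\rm eff} = a_{\rm Ba/Bb} + \tfrac{1}{2}\left(R_{\rm Ba} + R_{\rm Bb}\right),$$which appears as `a + 0.5*mean_trk_B[i, 1]` in the code.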
e2 = 0.2652
a2 = 66.2319   # semi-major axis of the A/B orbit in solar radii
masses_2 = [1.44, 1.928]   # [M_A, M_Ba + M_Bb] in solar masses; 1.928 = 0.984 + 0.944

trk_A = np.genfromtxt('/Users/grefe950/evolve/dmestar/trk/gs98/p000/a0/amlt1884/m1450_GS98_p000_p0_y28_mlt1.884.trk',
                      usecols=(0,1,2,3,4,5,6,7,8,9,10))
icurve = interp1d(trk_A[:,0], trk_A, kind='linear', axis=0)
new_trk_A = icurve(ages)
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
We are now in a position to compute the classical apsidal motion rate from the combined A/B tracks.
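For the relativistic term, the cell below evaluates the Giménez (1985) expression, with masses in solar units and the outer orbital period $P = 33.945$ d:$$\dot{\omega}_{\rm GR} = 5.45\times10^{-4}\,\frac{1}{1-e^2}\left(\frac{m_1 + m_2}{P}\right)^{2/3}\ {\rm deg\ cycle^{-1}}.$$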
cl_apsidal_motion_rate = np.empty((len(ages), 2))
for i, age in enumerate(ages):
    # treat the B pair as a single star with effective radius a + (R_Ba + R_Bb)/2
    c2_AB = c2(masses_2, [10**new_trk_A[i, 4], a + 0.5*mean_trk_B[i, 1]], e2, a2)
    cl_apsidal_motion_rate[i] = np.array([age, 360.0*(c2_AB[0]*new_trk_A[i, 10] + c2_AB[1]*mean_trk_B[i, 2])])

# Giménez (1985); total mass of the A/B pair in solar masses, outer period of 33.945 d
GR_apsidal_motion_rate = 5.45e-4*(sum(masses_2)/33.945)**(2./3.) / (1.0 - e2**2)
GR_apsidal_motion_rate
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
One can see from this that the general relativistic component is a very small contribution to the total apsidal motion of the system. Let's look at the evolution of the apsidal motion for the A/B binary system.
%matplotlib inline import matplotlib.pyplot as plt fig, ax = plt.subplots(1, 1, figsize=(8., 8.), sharex=True) ax.grid(True) ax.tick_params(which='major', axis='both', length=15., labelsize=18.) ax.set_xlabel('Age (yr)', fontsize=20., family='serif') ax.set_ylabel('Apsidal Motion Rate (deg / cycle)', fontsize=20., family='serif') ax.plot([1.0e6, 1.0e8], [0.0215, 0.0215], '--', lw=1, c='#555555') ax.plot([1.0e6, 1.0e8], [0.0255, 0.0255], '--', lw=1, c='#555555') ax.semilogx(cl_apsidal_motion_rate[:, 0], cl_apsidal_motion_rate[:, 1], '-', lw=2, c='#b22222')
_____no_output_____
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
How sensitive is this to the properties of the A component, which are fairly uncertain?
icurve = interp1d(cl_apsidal_motion_rate[:,1], cl_apsidal_motion_rate[:,0], kind='linear')
print(icurve(0.0235)/1.0e6, icurve(0.0255)/1.0e6, icurve(0.0215)/1.0e6)
11.2030404132 9.66153795127 12.8365039818
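The nominal interpolation above does not by itself answer the sensitivity question, so here is a minimal illustrative sketch (not part of the original analysis) that rescales the A component's interior structure constant by ±10% and re-derives the age; it assumes the arrays `ages`, `new_trk_A`, `mean_trk_B`, and the `c2` function defined earlier are still in scope.

# Illustrative sensitivity check: perturb k_2 of component A by +/-10%
for scale in (0.9, 1.0, 1.1):
    rates = np.empty((len(ages), 2))
    for i, age in enumerate(ages):
        c2_AB = c2(masses_2, [10**new_trk_A[i, 4], a + 0.5*mean_trk_B[i, 1]], e2, a2)
        rates[i] = [age, 360.0*(c2_AB[0]*new_trk_A[i, 10]*scale + c2_AB[1]*mean_trk_B[i, 2])]
    icurve = interp1d(rates[:, 1], rates[:, 0], kind='linear')
    print('k2_A scale = {:.1f}: age = {:.2f} Myr'.format(scale, float(icurve(0.0235))/1.0e6))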
MIT
Projects/upper_sco_age/apsidal_motion_age.ipynb
gfeiden/Notebook
CORD19 Analysis
%matplotlib inline # import nltk # nltk.download('stopwords') # nltk.download('punkt') # nltk.download('averaged_perceptron_tagger') import json import yaml import os import nltk import matplotlib.pyplot as plt import re import pandas as pd from nltk.corpus import stopwords #import plotly.graph_objects as go import networkx as nx
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Configurations
# import configurations
with open('config.yaml','r') as ymlfile:
    cfg = yaml.safe_load(ymlfile)   # safe_load avoids the YAMLLoadWarning shown below
C:\Users\david\Anaconda3\lib\site-packages\ipykernel_launcher.py:3: YAMLLoadWarning: calling yaml.load() without Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details. This is separate from the ipykernel package so we can avoid doing imports until
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
General Functions
def get_papers(path):
    # get list of papers .json from path
    papers = []
    for file_name in os.listdir(path):
        papers.append(file_name)
    return papers

def extract_authors(authors_list):
    '''
        Function to extract authors metadata from list of authors
    '''
    authors = []
    for curr_author in authors_list:
        author = {}
        author['first'] = curr_author['first']
        author['middle'] = curr_author['middle']
        author['last'] = curr_author['last']
        authors.append(author)
    return authors

def extract_abstract(abstract):
    # use regex to remove 'word count: 194', get word count ?
    # use regex to remove 'Text word count: 5168', get text word count ?
    # remove 1,2 digit numbers that don't have text attached ?
    stop_sentences = ['All rights reserved.','No reuse allowed without permission.','Abstract','author/funder']
    abstract_text = ''
    for section in abstract:
        abstract_text = abstract_text + ' ' + section['text']
    abstract_text = abstract_text.strip(" ")
    for s in stop_sentences:
        abstract_text = abstract_text.replace(s,"")
    return abstract_text

def extract_references(bib_entries):
    refs = []
    for r in bib_entries:
        ref = {}
        ref['id'] = bib_entries[r]['ref_id']
        ref['title'] = bib_entries[r]['title']
        ref['authors'] = bib_entries[r]['authors']
        ref['year'] = bib_entries[r]['year']
        refs.append(ref)
    return refs

def extract_paper_metadata(paper):
    paper_metadata = {}
    paper_metadata['id'] = paper['paper_id']
    paper_metadata['title'] = paper['metadata']['title']
    # authors live under the 'metadata' key in the CORD-19 JSON schema
    paper_metadata['authors'] = extract_authors(paper['metadata']['authors'])
    paper_metadata['abstract'] = extract_abstract(paper['abstract'])
    paper_metadata['refs'] = extract_references(paper['bib_entries'])
    return paper_metadata

def get_paper_data(path, paper_id):
    file_path = os.path.join(path, paper_id)
    with open(file_path, 'r') as f:
        paper_info = json.load(f)
    return paper_info
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Objects
class Author: def __init__(self, firstname, middlename, lastname): self.firstName = firstname self.middleName = middlename self.lastName = lastname def __str__(self): return '{} {} {}'.format(self.firstName, self.middleName, self.lastName) class Paper: def __init__(self, sha, title='', authors=None, date=None): self.id = sha self.title = title self.authors = authors self.date = date self.url = '' def __str__(self): s = '' s += 'Paper ID: {}\n'.format(self.id) s += 'Title: {}\n'.format(self.title) if self.authors: s += '# Authors: {}\n'.format(len(self.authors)) else: s += '# Authors: 0\n' s += 'Date: {}\n'.format(self.date) s += 'URL: {}'.format(self.url) return s path = cfg['data-path'] + biorxiv print(path) #paper = json.loads(path)
C:\Users\david\OneDrive\Bureau\CORD-19-research-challenge\\2020-03-13\biorxiv_medrxiv\biorxiv_medrxiv
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Metadata
meta = '2020-03-13/all_sources_metadata_2020-03-13.csv' df_meta = pd.read_csv(cfg['data-path'] + meta) df_meta.head() df_meta[df_meta['has_full_text']==True] df_meta.info() df_meta['source_x'].unique() paper_ids = set(df_meta.iloc[:,0]) paper_ids.pop() paper_ids df_meta[df_meta['source_x']=='biorxiv'][['sha','doi']]
_____no_output_____
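As an aside, the `paper_ids.pop()` call above presumably discards one arbitrary element (most likely the NaN `sha`); an explicit, illustrative alternative would be:

# Illustrative alternative: drop missing sha values explicitly
paper_ids = set(df_meta['sha'].dropna())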
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
biorxiv_medrxiv
biorxiv = '\\2020-03-13\\biorxiv_medrxiv\\biorxiv_medrxiv'
path = cfg['data-path'] + biorxiv
papers = get_papers(path)

cnt = 0
# check if papers are in the metadata dataframe
for paper in papers:
    if paper[:-5] not in paper_ids:
        print(paper)
    else:
        cnt += 1
print('There are {}/{} papers present in the metadataset.'.format(cnt, len(papers)))

print('Examples:')
for paper in papers[:5]:
    print(paper)

paper_info = get_paper_data(cfg['data-path'] + biorxiv, papers[10])
paper_info

extract_paper_metadata(paper_info)

df_meta[df_meta['sha']==paper_info['paper_id']]
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
pmc_custom_license
pmc = r'2020-03-13\pmc_custom_license\pmc_custom_license'   # raw string for the Windows path
path = cfg['data-path'] + pmc
pmc_papers = get_papers(path)
pmc_papers[:5]

cnt = 0
# check if papers are in the metadata dataframe
for paper in pmc_papers:
    if paper[:-5] not in paper_ids:
        print(paper)
    else:
        cnt += 1
print('There are {}/{} papers present in the metadataset.'.format(cnt, len(pmc_papers)))

paper_info = get_paper_data(path, pmc_papers[10])
paper_info

df_meta[df_meta['sha']=='0036e8891c93ae63611bde179ada1e03e8577dea']
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
comm_use_subset noncomm_use_subset
# extract data from all papers all_papers_data = [] for paper_name in papers: file_path = os.path.join(path,paper_name) with open(file_path, 'r') as f: paper_info = extract_paper_metadata(json.load(f)) all_papers_data.append(paper_info) for i in range(10): print('- {}'.format(all_papers_data[i]['title'])) # get json data of current paper file_path = os.path.join(path,papers[0]) with open(file_path, 'r') as f: paper = extract_paper_metadata(json.load(f)) print(paper['id'])
0015023cc06b5362d332b3baf348d11567ca2fbb
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Authors
def are_equal(author1, author2):
    """Compare two author dicts (as produced by extract_authors) on
    first initial, middle names, and last name."""
    return (author1['first'][0] == author2['first'][0]
            and author1['middle'] == author2['middle']
            and author1['last'] == author2['last'])

class Author:
    def __init__(self, firstname, middlename, lastname):
        self.firstName = firstname
        self.middleName = middlename
        self.lastName = lastname
        self.papers = []

    def __str__(self):
        return '{} {} {}'.format(self.firstName, self.middleName, self.lastName)

authors = []
_____no_output_____
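A quick illustrative check of `are_equal`, using author dicts in the format produced by `extract_authors` (the names here are hypothetical):

a1 = {'first': 'Joseph', 'middle': ['C'], 'last': 'Ward'}
a2 = {'first': 'J', 'middle': ['C'], 'last': 'Ward'}
print(are_equal(a1, a2))   # True: same first initial, middle names, and last name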
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Co-Authors
from itertools import combinations

co_authors_net = nx.Graph()

# for each paper
for i in range(len(all_papers_data)):
    # get list of authors
    co_authors = []
    for author in all_papers_data[i]['authors']:
        author_full_name = ''
        # only keep authors with first and last names
        if author['first'] and author['last']:
            author_full_name += author['first']
            for initial in author['middle']:
                author_full_name += ' ' + initial
            author_full_name += ' ' + author['last']
            author_full_name = author_full_name.strip(' ')   # str.strip returns a new string
            co_authors.append(author_full_name)
    #print(co_authors)
    for combo in combinations(co_authors,2):
        co_authors_net.add_edge(combo[0],combo[1])
    #print('-'*60)

for i in combinations([1,2,3],2):
    print(i)

nx.draw(co_authors_net, node_color='blue',node_size=10)
plt.savefig("graph.png", dpi=1000)
C:\Users\david\Anaconda3\lib\site-packages\networkx\drawing\nx_pylab.py:611: MatplotlibDeprecationWarning: isinstance(..., numbers.Number) if cb.is_numlike(alpha):
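A natural follow-up (not in the original notebook) is to rank authors by connectivity in the graph just built; a minimal sketch using networkx's degree view:

# Illustrative: ten most-connected authors by co-authorship degree
top_authors = sorted(co_authors_net.degree, key=lambda pair: pair[1], reverse=True)[:10]
for name, degree in top_authors:
    print('{}: {} co-authors'.format(name, degree))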
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Reference Authors
for i in range(3): for author in all_papers_data[i]['authors']: print(author) # referenced authors for ref in all_papers_data[i]['refs']: for author in ref['authors']: print(author) print('-'*60)
{'first': 'Joseph', 'middle': ['C'], 'last': 'Ward'} {'first': 'Lidia', 'middle': [], 'last': 'Lasecka-Dykes'} {'first': 'Chris', 'middle': [], 'last': 'Neil'} {'first': 'Oluwapelumi', 'middle': [], 'last': 'Adeyemi'} {'first': 'Sarah', 'middle': [], 'last': ''} {'first': '', 'middle': [], 'last': 'Gold'} {'first': 'Niall', 'middle': [], 'last': 'Mclean'} {'first': 'Caroline', 'middle': [], 'last': 'Wright'} {'first': 'Morgan', 'middle': ['R'], 'last': 'Herod'} {'first': 'David', 'middle': [], 'last': 'Kealy'} {'first': 'Emma', 'middle': [], 'last': ''} {'first': 'Warner', 'middle': [], 'last': ''} {'first': 'Donald', 'middle': ['P'], 'last': 'King'} {'first': 'Tobias', 'middle': ['J'], 'last': 'Tuthill'} {'first': 'David', 'middle': ['J'], 'last': 'Rowlands'} {'first': 'Nicola', 'middle': ['J'], 'last': ''} {'first': 'Stonehouse', 'middle': [], 'last': 'A#'} {'first': 'T', 'middle': [], 'last': 'Jackson', 'suffix': ''} {'first': 'T', 'middle': ['J'], 'last': 'Tuthill', 'suffix': ''} {'first': 'D', 'middle': ['J'], 'last': 'Rowlands', 'suffix': ''} {'first': 'N', 'middle': ['J'], 'last': 'Stonehouse', 'suffix': ''} {'first': 'N', 'middle': ['D'], 'last': 'Sanderson', 'suffix': ''} {'first': 'N', 'middle': ['J'], 'last': 'Knowles', 'suffix': ''} {'first': 'D', 'middle': ['P'], 'last': 'King', 'suffix': ''} {'first': 'E', 'middle': ['M'], 'last': 'Cottam', 'suffix': ''} {'first': 'A', 'middle': [], 'last': 'Acevedo', 'suffix': ''} {'first': 'R', 'middle': [], 'last': 'Andino', 'suffix': ''} {'first': 'Y', 'middle': [], 'last': 'Peng', 'suffix': ''} {'first': 'Hcm', 'middle': [], 'last': 'Leung', 'suffix': ''} {'first': 'S', 'middle': ['M'], 'last': 'Yiu', 'suffix': ''} {'first': 'Fyl', 'middle': [], 'last': 'Chin', 'suffix': ''} {'first': 'S', 'middle': ['F'], 'last': 'Altschul', 'suffix': ''} {'first': 'W', 'middle': [], 'last': 'Gish', 'suffix': ''} {'first': 'W', 'middle': [], 'last': 'Miller', 'suffix': ''} {'first': 'E', 'middle': ['W'], 'last': 'Myers', 'suffix': ''} {'first': 'D', 'middle': ['J'], 'last': 'Lipman', 'suffix': ''} {'first': 'E', 'middle': [], 'last': 'Rieder', 'suffix': ''} {'first': 'T', 'middle': [], 'last': 'Bunch', 'suffix': ''} {'first': 'F', 'middle': [], 'last': 'Brown', 'suffix': ''} {'first': 'P', 'middle': ['W'], 'last': 'Mason', 'suffix': ''} {'first': 'N', 'middle': ['J'], 'last': 'Stonehouse', 'suffix': ''} {'first': 'L', 'middle': [], 'last': 'Martin', 'suffix': ''} {'first': 'G', 'middle': [], 'last': 'Duke', 'suffix': ''} {'first': 'J', 'middle': [], 'last': 'Osorio', 'suffix': ''} {'first': 'D', 'middle': [], 'last': 'Hall', 'suffix': ''} {'first': 'A', 'middle': [], 'last': 'Palmenberg', 'suffix': ''} {'first': '3d', 'middle': [], 'last': 'Wt', 'suffix': ''} ------------------------------------------------------------ {'first': 'Hanchu', 'middle': [], 'last': 'Zhou'} {'first': 'Jiannan', 'middle': [], 'last': 'Yang'} {'first': 'Kaicheng', 'middle': [], 'last': 'Tang'} {'first': '†', 'middle': [], 'last': ''} {'first': 'Qingpeng', 'middle': [], 'last': 'Zhang'} {'first': 'Zhidong', 'middle': [], 'last': 'Cao'} {'first': 'Dirk', 'middle': [], 'last': 'Pfeiffer'} {'first': 'Daniel', 'middle': ['Dajun'], 'last': 'Zeng'} {'first': 'C', 'middle': [], 'last': 'Wang', 'suffix': ''} {'first': 'P', 'middle': ['W'], 'last': 'Horby', 'suffix': ''} {'first': 'F', 'middle': ['G'], 'last': 'Hayden', 'suffix': ''} {'first': 'G', 'middle': ['F'], 'last': 'Gao', 'suffix': ''} ------------------------------------------------------------ {'first': 'Salman', 'middle': 
['L'], 'last': 'Butt'} {'first': 'Eric', 'middle': ['C'], 'last': 'Erwood'} {'first': 'Jian', 'middle': [], 'last': 'Zhang'} {'first': 'Holly', 'middle': ['S'], 'last': 'Sellers'} {'first': 'Kelsey', 'middle': [], 'last': 'Young'} {'first': 'Kevin', 'middle': ['K'], 'last': 'Lahmers'} {'first': 'James', 'middle': ['B'], 'last': 'Stanton'} {'first': 'S', 'middle': ['H'], 'last': 'Abro', 'suffix': ''} {'first': 'Y', 'middle': ['A'], 'last': 'Bochkov', 'suffix': ''} {'first': 'S', 'middle': ['L'], 'last': 'Butt', 'suffix': ''} {'first': 'S', 'middle': ['A'], 'last': 'Callison', 'suffix': ''} {'first': 'P', 'middle': [], 'last': 'De Herdt', 'suffix': ''} {'first': 'S', 'middle': [], 'last': 'Escutenaire', 'suffix': ''} {'first': '', 'middle': [], 'last': 'Sybr', 'suffix': ''} {'first': 'H', 'middle': [], 'last': 'Ferreira', 'suffix': ''} {'first': 'T', 'middle': [], 'last': 'Hodgson', 'suffix': ''} {'first': 'M', 'middle': ['W'], 'last': 'Jackwood', 'suffix': ''} {'first': 'M', 'middle': ['W'], 'last': 'Jackwood', 'suffix': ''} {'first': 'M', 'middle': ['W'], 'last': 'Jackwood', 'suffix': ''} {'first': 'M', 'middle': [], 'last': 'Jain', 'suffix': ''} {'first': 'M', 'middle': ['A'], 'last': 'Johnson', 'suffix': ''} {'first': 'N', 'middle': ['M'], 'last': 'Kamble', 'suffix': ''} {'first': 'C', 'middle': ['L'], 'last': 'Keeler', 'suffix': ''} {'first': 'Jr', 'middle': [], 'last': '', 'suffix': ''} {'first': 'D', 'middle': [], 'last': 'Kim', 'suffix': ''} {'first': 'B', 'middle': [], 'last': 'Kingham', 'suffix': ''} {'first': 'E', 'middle': ['T'], 'last': 'Mckinley', 'suffix': ''} {'first': 'M', 'middle': ['M'], 'last': 'Naguib', 'suffix': ''} {'first': 'C', 'middle': ['H'], 'last': 'Okino', 'suffix': ''} {'first': 'T', 'middle': [], 'last': 'Pohuang', 'suffix': ''} {'first': 'J', 'middle': [], 'last': 'Quick', 'suffix': ''} {'first': 'J', 'middle': [], 'last': 'Quick', 'suffix': ''} {'first': 'H-J', 'middle': [], 'last': 'Roh', 'suffix': ''} {'first': 'P', 'middle': ['D'], 'last': 'Schloss', 'suffix': ''} {'first': 'W', 'middle': [], 'last': 'Spaan', 'suffix': ''} {'first': 'S', 'middle': ['J'], 'last': 'Spatz', 'suffix': ''} {'first': 'T', 'middle': [], 'last': 'Stenzel', 'suffix': ''} {'first': 'S', 'middle': [], 'last': 'Sutou', 'suffix': ''} {'first': 'K', 'middle': [], 'last': 'Tamura', 'suffix': ''} {'first': 'Z', 'middle': [], 'last': 'Tarnagda', 'suffix': ''} {'first': 'V', 'middle': [], 'last': 'Valastro', 'suffix': ''} {'first': 'C-H', 'middle': [], 'last': 'Wang', 'suffix': ''} {'first': 'J', 'middle': [], 'last': 'Wang', 'suffix': ''} {'first': 'S', 'middle': [], 'last': 'Wei', 'suffix': ''} {'first': 'I', 'middle': ['A'], 'last': 'Wickramasinghe', 'suffix': ''} {'first': 'A', 'middle': ['K'], 'last': 'Williams', 'suffix': ''} {'first': 'D', 'middle': ['E'], 'last': 'Wood', 'suffix': ''} {'first': 'J', 'middle': [], 'last': 'Ye', 'suffix': ''} ------------------------------------------------------------
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Extracting Key Words
# assumes `paper_info` holds the raw JSON of a paper loaded earlier with get_paper_data()
paper_info['body_text']

stop_sentences = ['All rights reserved.','No reuse allowed without permission.','Abstract','author/funder','The copyright holder for this preprint (which was not peer-reviewed) is the']
abstract_text = extract_abstract(paper_info['abstract'])
body_text = ''
for t in paper_info['body_text']:
    body_text += ' ' + t['text']
total_text = abstract_text + ' ' + body_text
for s in stop_sentences:
    total_text = total_text.replace(s,"")
print(total_text)

stop_words = set(stopwords.words('english'))
punctuation = [',','.',';',':','(',')','′','~']
#only keep nouns ...
other_words = set(['two','one','three','bioRxiv','furthermore','word','count','text'])
all_words = []
for word in total_text.split(" "):
    for p in punctuation:
        word = word.replace(p,"")
    word = word.strip(" ")
    try:
        int(word)
    except ValueError:
        if (not word.lower() in stop_words) and (word) and (word[:4] != 'http') and (not word.lower() in other_words):
            print(word)
            all_words.append(word)

'also'.lower() in stop_words

try:
    print(int('5'))
except ValueError:
    print('5′ is text')

freq = nltk.FreqDist(all_words)
freq.plot(20, cumulative=False)

test_word = 'https//'
print(test_word[:4])

lines = 'lines is some string of words'

# function to test if something is a noun
is_noun = lambda pos: pos[:2] == 'NN'
# do the nlp stuff
tokenized = nltk.word_tokenize(total_text)
nouns = [word for (word, pos) in nltk.pos_tag(tokenized) if is_noun(pos)]
print(nouns)

remove = set(['=','<','*','http','https','doi','biorxiv','preprint','word','count','text'])
words = [noun.replace('PKs','pseudoknot').replace('PK','pseudoknot') for noun in nouns if not noun.lower() in remove]
freq = nltk.FreqDist(words)
freq.plot(20, cumulative=False)
freq
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
TEST
paper_info = get_paper_data(cfg['data-path'] + biorxiv, papers[10])
paper_info

def get_sections_from_body(body):
    """Group body_text chunks under their (upper-case) section titles."""
    sections = {}
    for section in body:
        if section['section'].isupper():
            if section['section'] not in sections:
                sections[section['section']] = section['text']
            else:
                sections[section['section']] += ' ' + section['text']
    return sections

print(sections.keys())
sections

txt = 'INTRODUCTION'
txt[0] + txt[1:].lower()

print('ID: {}'.format(paper_info['paper_id']))
print('\nTitle: {}'.format(paper_info['metadata']['title']))
print('\nAuthors: {}'.format(paper_info['metadata']['authors']))
print('\nAbstract: {}'.format(paper_info['abstract']))

sections = get_sections_from_body(paper_info['body_text'])
for section in sections.keys():
    print('\n{}: {}'.format(section[0] + section[1:].lower(), sections[section]))

sections.keys()
_____no_output_____
MIT
CORD19_Analysis.ipynb
davidcdupuis/CORD-19
Implementing the Gradient Descent AlgorithmIn this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
import matplotlib.pyplot as plt import numpy as np import pandas as pd #Some helper functions for plotting and drawing lines def plot_points(X, y): admitted = X[np.argwhere(y==1)] rejected = X[np.argwhere(y==0)] plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k') plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k') def display(m, b, color='g--'): plt.xlim(-0.05,1.05) plt.ylim(-0.05,1.05) x = np.arange(-10, 10, 0.1) plt.plot(x, m*x+b, color)
_____no_output_____
MIT
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
Reading and plotting the data
data = pd.read_csv('data.csv', header=None) X = np.array(data[[0,1]]) y = np.array(data[2]) plot_points(X,y) plt.show()
_____no_output_____
MIT
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
TODO: Implementing the basic functionsHere is your turn to shine. Implement the following formulas, as explained in the text.- Sigmoid activation function$$\sigma(x) = \frac{1}{1+e^{-x}}$$- Output (prediction) formula$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$- Error function$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$- The function that updates the weights$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$$$ b \longrightarrow b + \alpha (y - \hat{y})$$
# Implement the following functions

# Activation (sigmoid) function
def sigmoid(x):
    return 1/(1 + np.exp(-x))

# Output (prediction) formula: y_hat = sigmoid(w . x + b)
def output_formula(features, weights, bias):
    return sigmoid(features.dot(weights) + bias)

# Error (log-loss) formula
def error_formula(y, output):
    return -y*np.log(output) - (1-y)*np.log(1-output)

# Gradient descent step
def update_weights(x, y, weights, bias, learnrate, output):
    weights = weights + learnrate*(y-output)*x
    bias = bias + learnrate*(y-output)
    return weights, bias
_____no_output_____
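As a quick sanity check (illustrative, not part of the original lab), the implementations above can be spot-tested against known properties of the sigmoid:

# Illustrative sanity checks for the functions above
assert abs(sigmoid(0) - 0.5) < 1e-12                    # sigma(0) = 0.5
assert abs(sigmoid(3) + sigmoid(-3) - 1.0) < 1e-12      # sigma(x) + sigma(-x) = 1
# with zero weights and bias, every prediction is 0.5
assert np.isclose(output_formula(np.array([0.3, 0.7]), np.zeros(2), 0.0), 0.5)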
MIT
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
Training functionThis function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
np.random.seed(44) epochs = 100 learnrate = 0.01 def train(features, targets, epochs, learnrate, graph_lines=False): errors = [] n_records, n_features = features.shape last_loss = None weights = np.random.normal(scale=1 / n_features**.5, size=n_features) bias = 0 for e in range(epochs): del_w = np.zeros(weights.shape) for x, y in zip(features, targets): output = output_formula(x, weights, bias) error = error_formula(y, output) weights, bias = update_weights(x, y, weights, bias, learnrate, output) # Printing out the log-loss error on the training set out = output_formula(features, weights, bias) loss = np.mean(error_formula(targets, out)) errors.append(loss) if e % (epochs / 10) == 0: print("\n========== Epoch", e,"==========") if last_loss and last_loss < loss: print("Train loss: ", loss, " WARNING - Loss Increasing") else: print("Train loss: ", loss) last_loss = loss predictions = out > 0.5 accuracy = np.mean(predictions == targets) print("Accuracy: ", accuracy) if graph_lines and e % (epochs / 100) == 0: display(-weights[0]/weights[1], -bias/weights[1]) # Plotting the solution boundary plt.title("Solution boundary") display(-weights[0]/weights[1], -bias/weights[1], 'black') # Plotting the data plot_points(features, targets) plt.show() # Plotting the error plt.title("Error Plot") plt.xlabel('Number of epochs') plt.ylabel('Error') plt.plot(errors) plt.show()
_____no_output_____
MIT
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
Time to train the algorithm!When we run the function, we'll obtain the following:- 10 updates with the current training loss and accuracy- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.- A plot of the error function. Notice how it decreases as we go through more epochs.
train(X, y, epochs, learnrate, True)
========== Epoch 0 ==========
Train loss:  0.713584519538
Accuracy:  0.4
-0.34937097 -0.27811786 -0.64160872 -0.47759711 -0.30550929 -0.44681296 -0.28344802 -0.39079267 -0.42053737 -0.59827068 -0.59225773 -0.29337724 -0.56197831 -0.30205953 -0.27125126 -0.36878815 -0.40581038 -0.34212787 -0.28744863 -0.48857782 -0.39534588 -0.32218705 -0.50156889 -0.34820772 -0.28020723 -0.47065796 -0.51650616 -0.23507932 -0.42478681 -0.48105273 -0.43688868 -0.22448288 -0.49020457 -0.27551203] -0.826520739 -0.192094323252 -0.348343744125 -0.125683651867 -0.354242879392 -0.237445709351 -0.128955750853 -0.285892735059 -0.128144268525 -0.108231726066 0.0569437612232 0.044514576223 -0.0319951623009 -0.224154337971 -0.18230864671 -0.268681315322 -0.323379095066 -0.130492959655 -0.111863602486 -0.0672291814421 -0.126996284276 0.0176165153212 -0.170074042878 0.133305752918 -0.379193579275 0.148586918457 0.228529269452 0.0785173869914 -0.175919981655 -0.241326200802 0.113337708522 -0.23107835344 0.198962965385 -0.0279544892183 0.0727333248441 0.0820055947936 -0.225104136635 0.0379635706869 -0.161531889622 -0.141013555584 0.0304992348588 0.0126681743623 -0.050528713793 0.123039986429 0.120452351766 0.162270137763 0.0192124307006 0.217196253482 0.0691500260201 0.025950273452 0.256107071063 0.126390889989 -0.19982696626 0.11057624394 -0.1636742865 -0.304793515849 -0.0568689464259 -0.0334337675833 -0.242620710172 -0.0948436081271 -0.017784445563 -0.192960136678 -0.167666092603 -0.106895392356 -0.308764870212 -0.147190170307 -0.120231198087 -0.0627988620149 -0.429027732885 -0.260353797671 -0.0962677584073 -0.256008121042 -0.0966789826375 -0.217723974747 -0.256861054743 -0.437591896117 -0.441307248218 -0.152758468091 -0.428880843203 -0.178042637619 -0.155932360607 -0.262033086832 -0.308194319482 -0.250086974814 -0.197985338319 -0.41681908554 -0.323836571486 -0.268394533278 -0.451995893777 -0.310993451465 -0.25712522567 -0.445716926454 -0.498288104733 -0.230730268056 -0.428879692298 -0.492636756778 -0.455578124643 -0.22917046825 -0.539493302104 -0.300684234958 [-0.82616615 -0.19722674 -0.35946593 -0.15356108 -0.39813868 -0.2551663 -0.16951134 -0.34311265 -0.19422426 -0.15884742 0.02735597 -0.0156276 -0.11568097 -0.32839525 -0.29726048 -0.36635058 -0.43557836 -0.2360646 -0.25091883 -0.20200001 -0.27677139 -0.14043003 -0.34766193 0.00234201 -0.58218316 -0.05719929 0.04527808 -0.07770323 -0.32970519 -0.45111245 -0.07497173 -0.40826814 -0.02532215 -0.29264553 -0.14372199 -0.12792626 -0.51943322 -0.22860111 -0.43676756 -0.40761172 -0.24540295 -0.32838665 -0.29475465 -0.21243062 -0.09523234 -0.08838003 -0.31245908 -0.10913333 -0.2148792 -0.29776785 -0.20281358 -0.24232074 -0.62958665 -0.30397246 -0.52466276 -0.73960982 -0.45118956 -0.40600427 -0.59425326 -0.45183476 -0.31229545 -0.53692962 -0.47450457 -0.41655376 -0.60580453 -0.44736754 -0.38539682 -0.30474315 -0.66937213 -0.53515942 -0.37172732 -0.48754195 -0.34399439 -0.43372193 -0.4559783 -0.65711723 -0.6435533 -0.34357396 -0.60490506 -0.34446816 -0.31420214 -0.41502254 -0.45652243 -0.37873819 -0.30541201 -0.54483041 -0.42737281 -0.38020071 -0.54909271 -0.4040438 -0.34475344 -0.51593444 -0.56058255 -0.28802907 -0.47704125 -0.53245713 -0.48727941 -0.24966418 -0.55699576 -0.30773869] -0.826166148351 -0.188838269006 -0.343982799729 -0.131613387357 -0.367393138175 -0.221070215534 -0.127813577651 -0.292488268295 -0.136986007203 -0.099164930263 0.0861976835832 0.0536766563969 -0.0363288194511 -0.238149554954 -0.198712168776 -0.266656319413 -0.326316194542 -0.125060325131 -0.125091708325 -0.073117763424 -0.137881700984 0.00497295491558 -0.189276277289 
0.145062035342 -0.404006750779 0.123877302449 0.220382059418 0.0898477970665 -0.158758677895 -0.251202722867 0.116705646413 -0.216385568833 0.188815985277 -0.054248949401 0.0744408397268 0.0898301617149 -0.254445177847 0.0248543004003 -0.174053074264 -0.145988012538 0.0231205981756 -0.0213554384019 -0.0368056280859 0.0974013362779 0.150313192548 0.178641103988 0.00388790299289 0.205590621745 0.0790509243957 0.0219450456152 0.19725837072 0.10161512505 -0.251580701585 0.0587336761997 -0.193595975625 -0.371166152594 -0.111020960969 -0.0821618372331 -0.283944370315 -0.144446632008 -0.0401501007204 -0.244626953704 -0.204177642271 -0.150205145436 -0.348316514007 -0.194767969386 -0.152042710095 -0.0853367261531 -0.452511125185 -0.312757462297 -0.156831491684 -0.291475160719 -0.151209192355 -0.254895236204 -0.286624211733 -0.489759708755 -0.485919021343 -0.196182635231 -0.465017067633 -0.213602243804 -0.191858614726 -0.300905427225 -0.351153586516 -0.27937587042 -0.209358404271 -0.464642384187 -0.348344033532 -0.317595958354 -0.490880850232 -0.357655495115 -0.311835538276 -0.481777972473 -0.533028342407 -0.273821201248 -0.471013036243 -0.533735339238 -0.495540291208 -0.245312199451 -0.59459576894 -0.323117543979 [-0.81767815 -0.18630443 -0.3474223 -0.15126258 -0.40252332 -0.23193042 -0.16057834 -0.34136129 -0.1946709 -0.14251743 0.0625815 0.00082089 -0.11183407 -0.33353582 -0.30472162 -0.35652114 -0.43037349 -0.22309525 -0.25545842 -0.19970721 -0.27915787 -0.14456403 -0.3579082 0.02098749 -0.59763337 -0.07286758 0.04506253 -0.05955468 -0.30592762 -0.45268258 -0.06427383 -0.38675326 -0.02732024 -0.30975339 -0.13462018 -0.11306556 -0.53937449 -0.23348878 -0.44105101 -0.40483444 -0.24497767 -0.35307069 -0.27453465 -0.22934999 -0.05990154 -0.06571552 -0.31957129 -0.11287423 -0.19830105 -0.29425743 -0.25114497 -0.25871202 -0.67142515 -0.34630068 -0.5463371 -0.79594329 -0.49639769 -0.44633908 -0.62768498 -0.4934224 -0.32810151 -0.58080801 -0.50413054 -0.45288461 -0.63866857 -0.48818043 -0.41127055 -0.32182789 -0.68735688 -0.58118471 -0.4259551 -0.51778427 -0.39290981 -0.466055 -0.4812702 -0.70429193 -0.68364456 -0.38277219 -0.63715664 -0.37635366 -0.34661077 -0.45046747 -0.4961523 -0.40516376 -0.31436368 -0.58962151 -0.4494725 -0.42672141 -0.58568631 -0.44850893 -0.39742401 -0.55043332 -0.59393402 -0.32983783 -0.51810979 -0.57270194 -0.52659337 -0.2654186 -0.61167605 -0.33008122] -0.817678148271 -0.17793767123 -0.331994285564 -0.129403926683 -0.371880440748 -0.197912557548 -0.119024752397 -0.290915093797 -0.137613173617 -0.0830036978165 0.121227406085 0.0698238009752 -0.0328431752417 -0.24367510201 -0.206541433935 -0.257154189556 -0.321472616044 -0.112455810455 -0.130052478751 -0.071221197318 -0.140677075784 0.00044048433473 -0.199916121877 0.163380533387 -0.419874961947 0.107854267415 0.219879715856 0.107714479425 -0.135312031746 -0.253185475435 0.127011917248 -0.195291553644 0.18633096179 -0.0718558080372 0.0831260890181 0.104251751666 -0.274900383925 0.0195454348999 -0.178744109637 -0.143601421708 0.0231506469634 -0.046437549812 -0.0169026394825 0.0801282258183 0.185316594142 0.200899761813 -0.00367958458266 0.201424785371 0.0951988865726 0.0249853791781 0.14848105538 0.0849801587394 -0.293552745009 0.0165075958758 -0.215063321402 -0.427008800476 -0.155478430749 -0.12154850183 -0.316291946323 -0.184710924743 -0.0547471737954 -0.286874412606 -0.23214941689 -0.184661646217 -0.379176361157 -0.233316886007 -0.175723489005 -0.100295171644 -0.468252511107 -0.355966212637 -0.207867425332 -0.318727991694 
-0.196610322504 -0.283845247606 -0.30857223939 -0.532906553411 -0.521941620591 -0.231218894119 -0.493077270255 -0.241236643408 -0.219880238921 -0.331716872572 -0.385860674352 -0.301159758223 -0.214177770782 -0.504023769475 -0.365647584637 -0.358405092104 -0.521860810891 -0.396107341719 -0.357992391089 -0.510177406168 -0.560184584643 -0.309040886951 -0.505288743384 -0.56703811148 -0.527806401175 -0.255009021715 -0.641308266758 -0.33883647902 [-0.80379715 -0.17031208 -0.33030843 -0.14349868 -0.40104441 -0.20419048 -0.14648991 -0.33405607 -0.18950831 -0.12138699 0.10164831 0.02201699 -0.10260781 -0.3327999 -0.30620628 -0.34151399 -0.41974837 -0.20514033 -0.25417561 -0.19193277 -0.27581574 -0.14293132 -0.36202904 0.04427028 -0.60663359 -0.08224856 0.0503663 -0.03671883 -0.27764715 -0.44850544 -0.04853551 -0.36060207 -0.02369105 -0.32042793 -0.12037947 -0.09334479 -0.55267306 -0.23254489 -0.43946298 -0.39653077 -0.23896772 -0.37094966 -0.24965423 -0.23987143 -0.02069407 -0.03857478 -0.32071858 -0.11087741 -0.17691577 -0.28529167 -0.2915891 -0.26872061 -0.70554423 -0.38099354 -0.56128467 -0.84383237 -0.53378509 -0.47909961 -0.65380837 -0.52736111 -0.33752964 -0.61690059 -0.52666846 -0.48182794 -0.66424744 -0.52138269 -0.43026389 -0.33247642 -0.6987425 -0.61932277 -0.47195978 -0.54090497 -0.43386369 -0.49118484 -0.4996861 -0.74348458 -0.71609482 -0.41447072 -0.66215245 -0.40108662 -0.37185196 -0.47856749 -0.52822472 -0.42468492 -0.31726535 -0.62658084 -0.46485729 -0.46538241 -0.61483623 -0.48520418 -0.44195435 -0.57759898 -0.61999207 -0.36404116 -0.55154577 -0.60533507 -0.55835477 -0.27481818 -0.65805067 -0.34573674] -0.803797153197 -0.161980984995 -0.314965360409 -0.121772893741 -0.370569015555 -0.170324534462 -0.105170041881 -0.283897980151 -0.132756928406 -0.0621753004289 0.159966894408 0.0905651791407 -0.0241546296487 -0.243526658433 -0.208617472461 -0.242701826994 -0.311457168984 -0.0951176793218 -0.129476717063 -0.0641379726017 -0.138063053991 0.00133846156514 -0.20479916622 0.186002241784 -0.429709317206 0.0976929845042 0.22448025709 0.129869139873 -0.10776742739 -0.249892758261 0.141905423488 -0.170012572245 0.188968338606 -0.0835936801607 0.096431502857 0.12301596163 -0.289342369779 0.0194632686615 -0.178190945187 -0.136312668535 0.0281241504657 -0.0654503826461 0.0070424404666 0.0685079362957 0.223607781184 0.226995851947 -0.00603967746712 0.202240520771 0.115449343865 0.0327148746 0.106624293324 0.0738454760607 -0.328825452653 -0.0191286363017 -0.230801293641 -0.47559596447 -0.193284990185 -0.154540824177 -0.34251766162 -0.218578985298 -0.0641009973519 -0.322663673721 -0.254308431458 -0.213069645661 -0.404109978923 -0.265683169188 -0.193881900385 -0.110134881466 -0.478763083599 -0.392862134072 -0.252336509932 -0.340388030813 -0.235733910349 -0.307190896214 -0.325215465405 -0.569856842018 -0.552084896824 -0.260514328962 -0.515633050691 -0.263469109194 -0.242512048255 -0.357022640856 -0.414918347801 -0.317845781583 -0.214603605439 -0.537609671326 -0.378072319087 -0.393445985662 -0.547437827972 -0.428921949547 -0.398251384681 -0.533346855153 -0.582167527315 -0.338862656554 -0.534179248985 -0.595000752988 -0.554803109235 -0.260364118625 -0.682228757326 -0.350009092425 [-0.7863724 -0.1510009 -0.30988001 -0.1321184 -0.39565734 -0.17355753 -0.12901789 -0.32307331 -0.18062263 -0.09713875 0.14311902 0.04629519 -0.08982865 -0.32814423 -0.30369537 -0.32311191 -0.40554839 -0.18393094 -0.24901148 -0.18053109 -0.26866329 -0.13745602 -0.36204405 0.07055297 -0.6112911 -0.08739444 0.0593323 -0.01084778 
-0.24647659 -0.44050873 -0.02949724 -0.33146293 -0.01632064 -0.32676435 -0.10276684 -0.07046068 -0.56148275 -0.22771217 -0.43396191 -0.38457194 -0.22925449 -0.38421282 -0.2217647 -0.24607909 0.02094049 -0.00855733 -0.3178794 -0.10505888 -0.15240922 -0.272721 -0.3266028 -0.27442767 -0.73437034 -0.41044753 -0.57168049 -0.88588804 -0.56579871 -0.50666985 -0.67494596 -0.55605485 -0.34266169 -0.6476479 -0.54438245 -0.50572127 -0.68485818 -0.54936855 -0.44458651 -0.338785 -0.70567596 -0.65203948 -0.5122869 -0.55917691 -0.4693353 -0.51140329 -0.5134365 -0.77718815 -0.74331126 -0.44103316 -0.68220214 -0.42094375 -0.39220535 -0.50164939 -0.55512086 -0.43951717 -0.31611659 -0.65816066 -0.47569656 -0.49863848 -0.63889766 -0.51656246 -0.48086853 -0.59975804 -0.64107457 -0.39302763 -0.57974972 -0.63275333 -0.58494437 -0.27993729 -0.6986909 -0.35686438] -0.786372399042 -0.142714837553 -0.29464171822 -0.110554567726 -0.365392450877 -0.139892467613 -0.0879910438446 -0.273276098699 -0.124260906395 -0.0383173894021 0.201023322004 0.114287790942 -0.0120290823338 -0.239591602733 -0.206844931389 -0.225004466383 -0.298030628872 -0.0746912092735 -0.125208081543 -0.053622093379 -0.131849346302 0.00585706865193 -0.205819886138 0.211403123683 -0.435475802262 0.0914856146148 0.232467624725 0.154794950635 -0.0776002731285 -0.243092162536 0.159799562708 -0.142043641315 0.195013662347 -0.0913661617176 0.112766072814 0.144601731824 -0.299711314028 0.0228701403166 -0.174139476275 -0.125781527574 0.0363769208487 -0.0803341715512 0.0335874950601 0.0607083487363 0.263936527396 0.255546564197 -0.00491382446055 0.206378275168 0.138355543883 0.0435431192152 0.0695587282008 0.0664280301432 -0.359481392741 -0.0502200314264 -0.242650144566 -0.51914062462 -0.226497304685 -0.183129999144 -0.364549677404 -0.248039022947 -0.0699168636167 -0.353995658767 -0.27249686316 -0.237324160721 -0.424985779188 -0.293790162766 -0.208279953346 -0.116517730666 -0.485738900974 -0.425393162644 -0.292240390811 -0.358226463969 -0.27050748939 -0.326700447288 -0.33824989776 -0.602519821244 -0.578180526211 -0.285858272715 -0.53442246807 -0.282005108771 -0.261453848768 -0.378550013098 -0.440086269645 -0.331060870827 -0.212095172031 -0.567189656641 -0.387189698027 -0.424493326968 -0.569303753504 -0.457839556465 -0.434409070679 -0.552930703261 -0.600607452983 -0.36495961098 -0.559357077158 -0.61928453116 -0.578172129827 -0.262799448174 -0.719115661881 -0.358101692806 [-0.76665322 -0.12955374 -0.28732289 -0.11837114 -0.3876838 -0.14111907 -0.10935918 -0.30968096 -0.16928851 -0.07090894 0.18602417 0.07253088 -0.07473064 -0.32089142 -0.29852799 -0.30251918 -0.38902043 -0.16063627 -0.24127803 -0.16675525 -0.258997 -0.12943862 -0.35931861 0.09873007 -0.61303083 -0.08969287 0.07070554 0.01694299 -0.21350459 -0.42999531 -0.00833459 -0.30044866 -0.00648364 -0.33017946 -0.08297591 -0.04555907 -0.56726019 -0.2203038 -0.42587154 -0.37022237 -0.21710937 -0.39434098 -0.19198089 -0.2493823 0.06402443 0.02325711 -0.31239114 -0.09671362 -0.12591979 -0.25779574 -0.35784892 -0.27724046 -0.75954552 -0.43628446 -0.5789956 -0.92387785 -0.5940946 -0.53066307 -0.69266913 -0.58113041 -0.34490541 -0.67470162 -0.55880418 -0.52614623 -0.70206846 -0.5737582 -0.4557331 -0.34217115 -0.70960921 -0.68100378 -0.5486596 -0.57413774 -0.50100274 -0.52826091 -0.5240166 -0.80708993 -0.76692284 -0.46405892 -0.69886835 -0.43746533 -0.4092132 -0.52128753 -0.57845203 -0.45115917 -0.31226914 -0.68602062 -0.48345756 -0.52815102 -0.65946422 -0.54423032 -0.51587507 -0.61848471 -0.65874976 -0.41841366 
-0.60434612 -0.65657865 -0.60797322 -0.28217859 -0.73533732 -0.36492446] -0.766653219268 -0.121318996814 -0.272202455524 -0.096988387658 -0.35765769445 -0.107687148883 -0.0686642047691 -0.260292240352 -0.113371015826 -0.012535775516 0.243457377691 0.139902561309 0.00234038840155 -0.233145618021 -0.202511295537 -0.205213934576 -0.282382609869 -0.0522875192656 -0.118492680801 -0.0408587150245 -0.123259015824 0.0127730937756 -0.204259063404 0.238554322857 -0.438503843998 0.0879420664434 0.242682190451 0.181467888842 -0.0458063349927 -0.23397825477 0.179622788855 -0.112393890105 0.203308346489 -0.0964605009762 0.131055212479 0.167982060521 -0.307319364033 0.0285915893214 -0.167769730206 -0.113129453179 0.0467844225079 -0.0924013533492 0.0617591156044 0.0554905469031 0.305459965206 0.285618784405 -0.00146559860433 0.212716519082 0.162940834212 0.0563956774703 0.0358429818815 0.0615222397514 -0.386929949285 -0.0781505906705 -0.251854419412 -0.559141082072 -0.25650747397 -0.208663610623 -0.383692645229 -0.27443670412 -0.0733478605428 -0.382224768152 -0.287961093462 -0.258707642424 -0.443068095061 -0.318939753516 -0.220109709209 -0.120567632726 -0.490327105671 -0.454878167248 -0.328934678928 -0.373441892953 -0.302236322531 -0.343570656432 -0.348823106162 -0.632188420397 -0.601468703256 -0.308462246356 -0.550622084673 -0.297999141551 -0.277856348238 -0.397468467889 -0.46255607031 -0.341906235292 -0.207639394296 -0.593975936635 -0.394063289566 -0.452749408899 -0.588604413185 -0.484039145493 -0.467682976792 -0.570042679676 -0.616608549916 -0.388465335871 -0.581955450103 -0.641015075055 -0.599025865115 -0.263277499411 -0.753161228256 -0.364107218972 [-0.74548425 -0.10676999 -0.26343845 -0.10310178 -0.37801798 -0.10760934 -0.08832275 -0.29473649 -0.15636805 -0.04346512 0.22971011 0.09996439 -0.05814817 -0.31193632 -0.29161024 -0.28054979 -0.37100753 -0.13604634 -0.23186284 -0.15145258 -0.24769389 -0.11975889 -0.35477675 0.12805496 -0.61281745 -0.09008319 0.08363732 0.04590008 -0.17946616 -0.41784648 0.01415799 -0.26831074 0.00495779 -0.33163248 -0.06181344 -0.01941406 -0.57099171 -0.21120815 -0.41608744 -0.35433712 -0.20339223 -0.40233702 -0.16105573 -0.25073517 0.10789847 0.05613956 -0.30515873 -0.08671764 -0.09821648 -0.24136135 -0.38645477 -0.27811169 -0.78218273 -0.45960356 -0.58422617 -0.95900048 -0.61979534 -0.5521726 -0.70804258 -0.6036904 -0.3452138 -0.6991813 -0.57097125 -0.54417457 -0.7169404 -0.59564974 -0.464716 -0.34359456 -0.71152534 -0.70734699 -0.58224634 -0.58682911 -0.53000367 -0.54280819 -0.53243913 -0.83433378 -0.78803355 -0.48463193 -0.71320979 -0.4516949 -0.42392039 -0.53854867 -0.59931022 -0.46062597 -0.30663777 -0.71128572 -0.48913389 -0.55504628 -0.6776158 -0.56932385 -0.54813245 -0.63484561 -0.6740802 -0.44129478 -0.62643596 -0.67791027 -0.62853313 -0.28249164 -0.76917025 -0.37090589] ========== Epoch 10 ========== Train loss: 0.622583521045 Accuracy: 0.59 -0.745484254676 -0.0985907559605 -0.248444378069 -0.081912442392 -0.348248914638 -0.0744314830735 -0.0479846238235 -0.24578680139 -0.100929845284 0.0144226324815 0.286635851519 0.166673979532 0.0181472281333 -0.225051664692 -0.196487630019 -0.184108622406 -0.265317318967 -0.0286572524339 -0.110173404392 -0.0266491195529 -0.11311925328 0.0212591815574 -0.200983370307 0.266761205464 -0.439693351554 0.0861891306983 0.254339856408 0.209196546276 -0.0130576556681 -0.223358919013 0.20065117565 -0.0817445159466 0.213068940952 -0.0997480467094 0.150572863056 0.192463392145 -0.313054905502 0.0358332274097 -0.159879874326 
-0.0991146087838 0.0585863193167 -0.102540732176 0.0909002479571 0.0520159178363 0.347610214277 0.316582926565 0.00351788527298 0.220496803091 0.188545832179 0.0705463107879 0.00449985171937 0.0583121382393 -0.412126383395 -0.103858376072 -0.259256779688 -0.596613548885 -0.284259255986 -0.232055033725 -0.400830571689 -0.298684198783 -0.0751744815118 -0.40826922651 -0.301545515458 -0.278089236815 -0.459213679696 -0.342014506705 -0.230178631988 -0.123045464836 -0.49330442527 -0.482211532764 -0.363339038292 -0.386846511467 -0.331805741646 -0.358612583898 -0.357712513158 -0.659739636818 -0.622790403534 -0.329147735549 -0.565029431147 -0.312233749012 -0.292499549955 -0.414570902909 -0.483135884757 -0.351128115532 -0.20190423115 -0.618790902692 -0.399413681068 -0.479030005434 -0.606116894872 -0.508320655921 -0.498899574582 -0.585438092113 -0.630919639982 -0.410148960854 -0.602743269794 -0.660956141818 -0.618119086322 -0.262450262987 -0.78517515009 -0.368698683115 [-0.7234366 -0.08318983 -0.2387682 -0.08688173 -0.36726508 -0.07352376 -0.06645539 -0.27881992 -0.1424445 -0.0153256 0.27373668 0.12808279 -0.04064547 -0.30188461 -0.28355552 -0.25775389 -0.35207978 -0.11069491 -0.22136666 -0.13519618 -0.2353474 -0.10901221 -0.34904422 0.15802369 -0.61130454 -0.08920172 0.09755353 0.07551488 -0.14485723 -0.40465859 0.03744367 -0.23555641 0.01742035 -0.33177329 -0.03982466 0.00745152 -0.57334576 -0.20102652 -0.40521578 -0.33749456 -0.18868478 -0.40888084 -0.12949745 -0.25078411 0.15211863 0.08959818 -0.2967948 -0.07566377 -0.06981861 -0.2239896 -0.4131857 -0.27768679 -0.8030374 -0.48115085 -0.58804743 -0.99207016 -0.64366299 -0.57194044 -0.72178858 -0.62448313 -0.34423248 -0.72184691 -0.58158735 -0.56053344 -0.73019453 -0.61578839 -0.47222159 -0.34370546 -0.71209055 -0.73183716 -0.61384066 -0.59795751 -0.55711052 -0.55575772 -0.53939066 -0.85969632 -0.80739266 -0.5034877 -0.72594461 -0.46434024 -0.43703567 -0.55415656 -0.61843651 -0.46860583 -0.29984182 -0.73471966 -0.49339905 -0.58008886 -0.69408513 -0.59260066 -0.57842738 -0.64956438 -0.68778667 -0.46241452 -0.64676647 -0.6974942 -0.64736501 -0.2815197 -0.80099133 -0.37547894] -0.723436601081 -0.0750688554385 -0.223905865928 -0.0658935200631 -0.337764508497 -0.0406131790601 -0.0264896079664 -0.230328173452 -0.087507349641 0.0420535611129 0.330132507003 0.194105557648 0.0348461619554 -0.215893732187 -0.189363528265 -0.162214478046 -0.247378435896 -0.00430735952071 -0.100820582061 -0.0115350934413 -0.101989650539 0.0307555675352 -0.196579562788 0.295555104073 -0.439653913089 0.0856353992934 0.26691039315 0.237514393014 0.0201925904484 -0.211780652862 0.222395805398 -0.0505550096147 0.223765584341 -0.101819032652 0.170828579585 0.217577597312 -0.317519904784 0.0440575862221 -0.151009950964 -0.0842495723565 0.0712684758234 -0.111354771265 0.120567721776 0.0497164291911 0.390005334707 0.348014615849 0.00950397497976 0.229206105906 0.214725640202 0.0855041502859 -0.025134166719 0.0562450808802 -0.435719076192 -0.127980086299 -0.265428284494 -0.632248467721 -0.310393408918 -0.253924189012 -0.416563170203 -0.32140069525 -0.0759253679516 -0.432752464836 -0.313822810756 -0.296058730189 -0.474003790253 -0.363613550119 -0.239034198366 -0.124466677977 -0.495197175466 -0.508000700469 -0.396078434677 -0.398991278281 -0.359817190443 -0.372376527866 -0.36544527667 -0.685769264677 -0.642716669908 -0.348472482864 -0.578185724248 -0.325239927624 -0.305912754876 -0.430395537838 -0.502374488946 -0.359232662003 -0.195341879361 -0.642193293655 -0.403729449493 -0.503889446347 
-0.622368849916 -0.531227610714 -0.528620787466 -0.599629751403 -0.644049100867 -0.430533013966 -0.622242920121 -0.679626611376 -0.635964539694 -0.260759568363 -0.815708222231 -0.372332535619 [-0.70089616 -0.05917794 -0.21367771 -0.07009727 -0.35583472 -0.03919604 -0.04412634 -0.26232346 -0.12791229 0.01316003 0.3178083 0.15654019 -0.02260387 -0.29114622 -0.27477915 -0.23450302 -0.33262251 -0.08494219 -0.21019601 -0.11837351 -0.22235902 -0.09760146 -0.3425448 0.18829654 -0.60893509 -0.08747963 0.11206598 0.10544449 -0.11001185 -0.39083511 0.06116008 -0.2025275 0.03050949 -0.3310424 -0.01737776 0.03468495 -0.57477564 -0.19016583 -0.39366681 -0.32008569 -0.17338037 -0.41443368 -0.09764862 -0.24996723 0.19638671 0.12330171 -0.28771408 -0.06395308 -0.04107642 -0.20606702 -0.43856207 -0.2764032 -0.82262289 -0.50143333 -0.59091735 -1.02364092 -0.66621557 -0.59047075 -0.73439761 -0.64401713 -0.34239901 -0.74321499 -0.59113014 -0.57571674 -0.74232009 -0.63468054 -0.47871557 -0.34294457 -0.71175654 -0.75499645 -0.64398238 -0.60800252 -0.58284852 -0.56759328 -0.54533705 -0.88370558 -0.82550929 -0.52112593 -0.73756043 -0.47588187 -0.44904021 -0.56860272 -0.63633438 -0.47556573 -0.2923007 -0.75684157 -0.49670982 -0.60379854 -0.70936992 -0.61457569 -0.60729495 -0.66313244 -0.70035866 -0.48227812 -0.66584542 -0.71583734
MIT
Lesson 3: Introduction to Neural Networks/4 GradientDescent.ipynb
makeithappenlois/Udacity-AI
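The training log above was produced by a simple gradient-descent loop. Since the notebook code itself is not reproduced here, the following is only a minimal sketch of the kind of loop that prints such epoch summaries — the function names, learning rate, and update rule are assumptions, not the notebook's actual implementation:

```python
import numpy as np

def sigmoid(x):
    # logistic activation used for binary classification
    return 1 / (1 + np.exp(-x))

def train(features, targets, epochs=100, learnrate=0.5):
    n_records, n_features = features.shape
    # small random initial weights
    weights = np.random.normal(scale=1 / n_features ** 0.5, size=n_features)
    for e in range(epochs):
        for x, y in zip(features, targets):
            output = sigmoid(np.dot(x, weights))
            # per-record weight update proportional to the prediction error
            weights += learnrate * (y - output) * x
        if e % 10 == 0:
            out = sigmoid(np.dot(features, weights))
            loss = np.mean((out - targets) ** 2)
            accuracy = np.mean((out > 0.5) == targets)
            print("=" * 10, "Epoch", e, "=" * 10)
            print("Train loss:", loss)
            print("Accuracy:", accuracy)
    return weights
```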
Monitoring & Reporting

What `pipeline.py` is doing:

- Load:
  - Monitoring & Reporting Data
- Link MPRNs to GPRNs

Caveat

The M&R data is publicly available; however, the user still needs to [create their own s3 credentials](https://aws.amazon.com/s3/) to fully reproduce this pipeline (*i.e. they need an AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY*).

Setup

| ❗ Skip if running on Binder |
|-------------------------------|

Via [conda](https://github.com/conda-forge/miniforge):
conda env create --file environment.yml
conda activate hdd
_____no_output_____
MIT
combine-monitoring-and-reporting-mprns-and-gprns/README.ipynb
Rebeccacachia/projects
Run
python pipeline.py
_____no_output_____
MIT
combine-monitoring-and-reporting-mprns-and-gprns/README.ipynb
Rebeccacachia/projects
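For readers who want a feel for the linking step described above, here is a hypothetical pandas sketch — the file names, the `building_id` join key, and the output path are all assumptions, not the actual `pipeline.py` implementation:

```python
import pandas as pd

# hypothetical input files -- the real pipeline reads from s3
mprns = pd.read_csv("mprn_meter_readings.csv")   # electricity meter points
gprns = pd.read_csv("gprn_meter_readings.csv")   # gas meter points

# link each MPRN to its corresponding GPRN via a shared building identifier
linked = mprns.merge(gprns, on="building_id", how="left",
                     suffixes=("_mprn", "_gprn"))
linked.to_csv("mprns_linked_to_gprns.csv", index=False)
```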
Time and Dates

The `astropy.time` package provides functionality for manipulating times and dates. Specific emphasis is placed on supporting time scales (e.g. UTC, TAI, UT1, TDB) and time representations (e.g. JD, MJD, ISO 8601) that are used in astronomy and required to calculate, e.g., sidereal times and barycentric corrections. It uses Cython to wrap the C-language ERFA time and calendar routines, using a fast and memory-efficient vectorization scheme.

All time manipulations and arithmetic operations are done internally using two 64-bit floats to represent time. Floating-point algorithms are used so that the Time object maintains sub-nanosecond precision over times spanning the age of the universe.

The basic way to use `astropy.time` is to create a `Time` object by supplying one or more input time values as well as the time format and time scale of those values. The input time(s) can either be a single scalar like `"2010-01-01 00:00:00"` or a list or a numpy array of values, as shown below. In general, any output values have the same shape (scalar or array) as the input.
import numpy as np
from astropy.time import Time

times = ['1999-01-01T00:00:00.123456789', '2010-01-01T00:00:00']
t = Time(times, format='isot', scale='utc')
t
t[1]
_____no_output_____
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
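As a quick check of the two-float internal representation mentioned above, the two components are exposed as the `jd1` and `jd2` attributes (a small illustrative snippet):

```python
from astropy.time import Time

t = Time('1999-01-01T00:00:00.123456789', format='isot', scale='utc')
# the two 64-bit floats whose sum is the Julian Date; splitting the
# value this way is what preserves sub-nanosecond precision
print(t.jd1, t.jd2)
```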
The `format` argument specifies how to interpret the input values (e.g. ISO, JD, or Unix time). The `scale` argument specifies the time scale for the values (e.g. UTC, TT, or UT1); it is optional and defaults to UTC, except for `Time` from epoch formats. We could have written the above as:
t = Time(times, format='isot')
_____no_output_____
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
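Because the scale is tracked on the object, converting between scales is just an attribute access; for example (a short sketch — the exact TT−UTC offset depends on the leap-second table):

```python
from astropy.time import Time

t = Time('2010-01-01T00:00:00', scale='utc')
# convert from UTC to Terrestrial Time (TT); in 2010 TT ran
# roughly 66 seconds ahead of UTC
print(t.tt.isot)
```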
When the format of the input can be unambiguously determined, the `format` argument is not required, so we can simplify even further:
t = Time(times)
t
_____no_output_____
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
Now let’s get the representation of these times in the JD and MJD formats by requesting the corresponding Time attributes:
t.jd
t.mjd
_____no_output_____
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
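The same pattern extends to other formats; for instance, `unix` and `decimalyear` are also available as attributes (a short illustration):

```python
from astropy.time import Time

t = Time('2010-01-01T00:00:00')
print(t.unix)         # seconds since 1970-01-01 00:00:00 UTC
print(t.decimalyear)  # e.g. 2010.0 for the start of 2010
```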
The default representation can be changed by setting the `format` attribute:
t.format = 'fits'
t
t.format = 'isot'
t
_____no_output_____
Unlicense
day4/06. Astropy - Time.ipynb
ubutnux/bosscha-python-workshop-2022
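Once the values are in a `Time` object, they can also be handed back to standard Python; for example, the `datetime` attribute converts to `datetime.datetime` objects (note that this representation is limited to microsecond precision):

```python
from astropy.time import Time

t = Time(['1999-01-01T00:00:00.123456789', '2010-01-01T00:00:00'])
# conversion to an array of datetime.datetime objects;
# sub-microsecond digits are rounded away
print(t.datetime)
```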