500
The Backend Tech Stack at FanAI
Overview

FanAI is a company built around data, loads of it. In this post, we'll discuss the infrastructure we've put in place to handle big data workloads and APIs in a scalable, resilient way.

Overview of FanAI Platform

Cloud Platform

Early on, we chose to build on Google Cloud Platform (GCP) for the following reasons:
- GCP is unapologetically Kubernetes native. We knew we wanted to build on top of Kubernetes, and Google Kubernetes Engine (GKE) is the industry standard for managing Kubernetes. GKE is the only managed Kubernetes that provides automated master and node upgrades, and it integrates with Stackdriver (for logging and monitoring) by default.
- GCP is great for dealing with big data. It comes with arguably the best data warehousing solution on the market — BigQuery.
- GCP integrates seamlessly with Firebase / Firestore, which we prefer to use on our frontend for authentication and other user-centric functionality.
- GCP integrates seamlessly with the Google Suite, which makes it really easy to set up IAM rules and policies that fit with your organizational structure and roles. Security and privacy of data are extremely important to us, and we're a small team, so it saves a lot of headaches to have this tight integration.

Python Monorepo

Historically, we were building our platform in more of a SOA (service-oriented architecture), but in the last year we have moved towards a monorepo for improved productivity and developer experience. With a small team, and no dedicated devops or infrastructure people yet, we realized that there was just too much setup and boilerplate involved to warrant maintaining multiple repos. Having said that, it's important to note that our platform is not monolithic. There are clear boundaries and responsibilities for each service, and we have the ability to deploy each service independently of the others, even though they are in the same code repository. The monorepo allows us to easily share code amongst individually deployed applications. These individually deployed applications, however, do not import from each other, but rather from a shared core Python module, in order to prevent coupling between the individual releases. This will allow us to easily break them out into individual repos if we ever want to in the future, for example once we grow as a company and have the resources to devote a single team to each release. We try to stay on the cutting edge of stability with Python. We're currently running Python 3.7.

Kubernetes

As I mentioned previously, 90% of our backend runs in Kubernetes. We utilize GKE (Google Kubernetes Engine) to manage the underlying GCP Kubernetes infrastructure for us, and we define / manage groupings of Kubernetes resources using Helm. We define a lot in GKE, including workloads for APIs, ETLs, and databases, and ingress into our services. We've moved away from standing up databases in Kubernetes, however, preferring to utilize managed storage services from GCP. We'll talk more about that in the Storage section below.

APIs

We have a set of RESTful APIs that we've written in Python and stood up in Kubernetes. We utilize the hug API framework to enable faster API creation using annotations, easy versioning, and painless documentation. Hug is also one of the faster API frameworks, according to Pycnic. Taken from https://www.hug.rest/

ETLs / Data Gathering

We also have a set of ETLs written in Python, stood up in Kubernetes, and glued together via PubSub.
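To make the hug setup above concrete, here is a minimal sketch of a versioned endpoint; the routes, parameters, and fields are hypothetical, not FanAI's actual API.

```python
# A minimal, illustrative hug API with two versions of the same route.
import hug

@hug.get("/fans/{fan_id}", versions=1)
def get_fan_v1(fan_id: hug.types.text):
    """Return a bare-bones fan record (v1)."""
    return {"id": fan_id}

@hug.get("/fans/{fan_id}", versions=2)
def get_fan_v2(fan_id: hug.types.text, include_scores: hug.types.boolean = False):
    """v2 adds an optional flag; hug validates and documents the parameters for us."""
    fan = {"id": fan_id}
    if include_scores:
        fan["scores"] = []  # placeholder for real data
    return fan
```

Running `hug -f api.py` serves the endpoints locally and generates documentation for them automatically.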
We have 3 core ETLs that are providing data to FanAI data scientists, analysts, and the frontend 24x7:
- Twitch
- Twitter
- FullContact (demographic enrichment)
We'll typically define each piece of the ETL (extract, transform, load) as its own workload that is either kicked off periodically via a schedule / cron, or event-driven and kicked off via its subscription to some PubSub topic.

CI/CD

We currently just maintain 2 environments: staging and production. We use Google Cloud Build as our build server. We run a number of checks and linting in a pre-commit hook, including (but not limited to):
- Linting via Flake8
- Bandit for security issues
Then, for each pull request, we run a number of automated checks and tests, including (but not limited to):
- Unit tests
- Functional tests
Upon code being merged into our develop branch, we build a Docker image using previously built images as a cache to reduce build time, and run all of our functional, unit, and integration tests on the container. We then automatically deploy to our stage environment and kick off a set of Postman API regression tests. This is the final check. If the API tests fail, we roll back the release and any database migration that was performed.

Storage

We utilize a number of different storage technologies to meet our storage, security, and privacy needs as a big data analytics company.

ArangoDB

Arango is a multi-model database which we utilize for its graph database capabilities. We prefer a graph database to model the relationships between all sorts of entities in our universe. For example, a player can be on a roster, a roster could be part of a team, a team could be in a league, and so forth. We also model the relationships between brands and sub-brands and how they're categorized.

ElasticSearch

With all the entities in our graph DB, we needed a fast way to search through them, so we indexed them in ElasticSearch and provide APIs for our clients to quickly find the entity they're looking for.

Postgres

We utilize Postgres as our main transactional DB. Historically we had stood up Postgres within Kubernetes, but decided in the last year that the operational overhead wasn't worth it with our small team, and now rely on Google's CloudSQL managed Postgres.

Redis

We utilize Redis primarily for caching the results of web requests and queries that take a long time to complete, in order to improve the performance of our system. Similar to Postgres, we used to stand up Redis inside of Kubernetes, but have migrated to using Cloud MemoryStore.

Cloud Storage

We utilize Cloud Storage primarily as a means to ingest data from our clients in a secure manner and ensure that it is segregated from other clients' data. Cloud Storage gives us the ability to create signed URLs to ensure only select clients are able to upload data to us, and they can upload their data directly to Cloud Storage rather than passing it through our API layer first.

Closing

Thanks for reading. We'll surely have follow-on posts that go into more detail on all the different parts of our big data platform. If there's anything you're curious about, feel free to hit us up. We always love to talk shop. And one more thing: if you are a strong backend developer who would enjoy working on an exciting sponsorship performance analytics platform, send us your resume!
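As an illustration of the signed-URL upload flow described in the Cloud Storage section above, here is a minimal sketch using the google-cloud-storage client; the bucket and object names are placeholders rather than FanAI's actual layout.

```python
# Generate a short-lived URL that lets one client PUT a file straight into GCS,
# bypassing the API layer. Bucket/object names below are hypothetical.
from datetime import timedelta
from google.cloud import storage

def make_upload_url(bucket_name: str, object_name: str) -> str:
    client = storage.Client()  # uses application default credentials
    blob = client.bucket(bucket_name).blob(object_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(minutes=15),  # link is only valid briefly
        method="PUT",                      # client uploads directly to Cloud Storage
        content_type="text/csv",
    )

url = make_upload_url("acme-client-uploads", "acme/2020-06/fan_export.csv")
# The client can now upload with: curl -X PUT -H "Content-Type: text/csv" --data-binary @file.csv "$url"
```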
https://medium.com/fanai-engineering/the-backend-tech-stack-at-fanai-b48b4a61de19
['Karim Varela']
2020-06-11 22:05:39.894000+00:00
['Python Web Developer', 'Technology', 'Backend Development', 'Google Cloud Platform', 'Big Data Analytics']
501
Snowflake’s IPO and the Data Blizzard
Snowflake's IPO and the Data Blizzard

👉 Find more of this analysis at The Generalist.

Snowflake in 1 minute

Since its founding in 2012, Snowflake has grown into a leader in the cloud data warehousing space. The company's intuitive solution makes it easy to run analytical queries with a customer-aligned billing model. Intriguingly, Snowflake has established itself without the founder-led culture that dominates most of the tech industry. Instead, three external CEOs have guided the company to a ~$500M annual revenue run rate, with 132% year-over-year growth. That's attracted pre-IPO investment from Warren Buffett — a rare foray into tech for the Oracle of Omaha. Big winners of the listing include the secretive Sutter Hill Ventures and Altimeter Partners. Snowflake's toughest task may be in weathering the coming storm. Amazon, Microsoft, and Google all have products in the space, and with limited enduring defensibility, the company could see its position pressured.

As a member of The Generalist, you have access to The S-1 Club's full report. To learn about Snowflake's kooky founders, mercurial CEO Frank Slootman, and the mission to win the "data cloud," keep reading. If you enjoy this analysis, we'd appreciate you sharing it.

Analysts: David Wei, Danielle Morrill, Sam Parr, Justin Gage, Annika Lewis, Aashay Sanghvi, Karine Hsu, Mario Gabriele

Introduction

One last job. It's a scene you've watched before. A grizzled pro approached by an old friend in need of help. Perhaps they meet in a dingy basement, under the veil of secrecy, or share words in a banal local diner or coffee shop. The friend pleads with the pro. A daughter is sick, a partner in danger, a brother in a bad spot with some worse people. Or sometimes it's a question of upside — the money is just too good to pass up. One last job, the friend beseeches. It always works. It has to — otherwise, there would be no film. The "One Last Job" premise is so popular it's attained the status of a trope, serving as the basis for iconic films like Heat, Inception, Gone in Sixty Seconds, and countless others. After a period of reluctance, the pro succumbs. The job is on and the game afoot.

In 2018, Frank Slootman was retired and comfortable. The bolshy Dutchman had fulfilled his destiny in the corporate world, transforming Data Domain from a 20-person company into an "800-pound gorilla" doing $1B in sales, before taking the reins at ServiceNow. Starting in 2011, Slootman supercharged the business, increasing revenue from $75M to $1.5B over his six-year spell. Now it was time to relax. To sit on a few boards and enjoy life on the water. Slootman's sailboat, The Invisible Hand (really), had won the Transpac Honolulu Race the year prior. A man like Slootman would surely have seen such success not as a culmination but as the start of sustained dominance. Renegade Sailing

But then Slootman got a call. We can only speculate as to who made first contact. Before long, though, Slootman was talking to a familiar face: Mike Speiser. The two men had gotten to know each other starting in 2014 as both served on the board of Pure Storage, a fast-growing data storage company. Mutual respect grew. If anyone knew Snowflake and what the company needed, it was Speiser. He'd been there since the company's early days, both as an investor through his firm Sutter Hill Ventures (SHV), and as an operator — he served as Snowflake's first CEO before passing the torch to Bob Muglia.
Formerly of Microsoft and Juniper Networks, Muglia had performed admirably, overseeing a period of breakout growth, but when it became clear Slootman might be available, Snowflake's board pounced. The message the former ServiceNow chief received was clear: "We want you." According to Slootman, he didn't consider any other roles, and if the opportunity had come a year earlier, he would have turned it down. But, "Snowflake [was] a special company…they just don't come along often." For Slootman, it was time for one last job. This is his story, in part, but more importantly, the tale of a company too good to pass up.

Snowflake's history

Though Slootman and Speiser are central to this filing's tale, they are not where the adventure begins. To understand Snowflake's inception and its culture, there's only one place to start: the first 0:25 of this video. If this screenshot of founders Benoit Dageville and Thierry Cruanes doesn't entice you, we're not sure what will. Benoit and Thierry are kooky, playful, anything but boring. In that respect, they aren't what you'd picture when asked to imagine two long-time data architects. Back in the summer of 2012, these jokesters left their presumably-high-paying-and-cushy Oracle jobs. They teamed up with Marcin Zukowski, co-founder of Vectorwise, a database management company. The trio had a big idea: build a cloud-native data warehouse from the ground up.

To be clear, this is probably the least casual of ambitions one could have. Building a successful company as an entrepreneur is a daring aspiration in and of itself, but the complexity and scope of Benoit and Thierry's challenge is hard to overstate. For those that aren't intimately familiar with enterprise data infrastructure, this isn't your mother's mobile app. This is even far from standard B2B SaaS. Data warehouses are, well, absolute behemoths. But the trio knew what they were doing. Benoit and Thierry had worked on legacy architectures at Oracle, understood the drawbacks of the then-newly-hyped Hadoop, and knew there had to be a better way. Between them, they already held over 120 patents — this was not a team light on experience. Still, the task was formidable. The team built, often beneath a veil of secrecy, emerging from stealth with something unique, one-of-a-kind. A Snowflake, if you will.

Interestingly, none of the founders have ever served as CEO, though they remain key leaders: Benoit acts as President of Products, while Thierry is CTO. Marcin may have taken a step back — his LinkedIn identifies him as a "co-founder." Instead, as alluded to above, Snowflake has been helmed by three outside CEOs over the last six years. First Speiser, then Muglia, and finally Slootman. That pattern rarely screams "wild success," but Snowflake is the exception to the rule: the company has strategically leveled-up over time, identifying the right leader at the right time to maintain their startling trajectory. While every company has its trials and tribulations under the hood, from the outside, Snowflake has executed, dare we say it, perfectly. From January 31, 2019, to January 31, 2020, the company grew revenue by over 173% to $264M, with the following six months seeing Snowflake hit an Annual Run Rate (ARR) of $483M. Though Snowflake is still operating at a loss — losing $348M as of January 2020 — that growth has allowed them to access capital at a jaw-dropping clip. Snowflake has raised $1.4B, with $1.1B arriving in the past two years. Crunchbase

Safe to say: it's snowing in data land.
Number of mentions in the S-1:
- Data: 697
- Capital One: 31
- Data Cloud: 26
- Cloud Data Platform: 28
- Silo: 15
- Sutter Hill Ventures: 1
- Scarcity: 1

Market

Wintery tailwinds are expected to power growth in the world of data. While Snowflake pegs their market at ~$81B, it may be even more extensive than that. As of 2018, the International Data Corporation (IDC) found global data storage generated $88B in revenue with a capacity of 700 exabytes added. By 2023, that figure is expected to nearly double, reaching ~$176B in revenue and 11.7 zettabytes in storage. The data warehousing market was believed to be $13B in 2018, rising to $30B by 2025, a 12% CAGR. The APAC region is expected to grow more quickly, at a 15% CAGR. Encouragingly for Snowflake, incremental capacity and renewals are expected to be driven by the cloud. Gartner predicts 75% of databases will be in the cloud by 2022, with just 5% "repatriated" on-premises. Other sources suggest that hybrid strategies — the marriage of cloud and on-premise data warehousing — may expand rapidly. Just as encouragingly, "unstructured data" — a significant driver of Snowflake's rise — is growing between 40–50% per year.

It's worth noting that Snowflake is already well-established in the space. Per Datanyze, the company is the third-largest with an estimated 10.08% market share, just behind SAP Business Warehouse and Apache Hive. Mordor Intelligence

Gartner recognized Snowflake as a "Leader," though the company did trail more familiar names like Oracle, Microsoft, and AWS in terms of "completeness of vision" and "ability to execute." Gartner's Magic Quadrant

Finally, it's worth noting that the market seems to be up for grabs, with considerable fragmentation in the space. That's created a fluid dynamic between businesses as rivals compete and collaborate. That structure has aided Snowflake's rise over the past eight years but could make data domination a considerable challenge. In "Competition," we'll dive into one method to track Snowflake's market position, focused on "developer mindshare." Mordor Intelligence

What is a database?

If you've gotten this far, you probably know that Snowflake is a database. But what does that mean? How does Snowflake compare to multi-billion dollar incumbents like Oracle and MongoDB? Fundamentally, there are two types of databases: transactional and analytical. Snowflake is printing money by excelling at the latter.

Transactional databases are all around us. With few exceptions, every app is built on a database. That's what allows developers to store user emails, preferences, account information, and anything else that gets customized. These are called transactional databases because the data is generated through the little transactions that happen every time a user takes an action:
- A user signs up → add a row to the users' table
- A user changes their password → update an existing row
- A user loads their profile page → read from the users' table
If your app has many users, these transactions constantly occur (we're talking about thousands per second). As a result, early databases were optimized for this use case: to add and update rows quickly and ensure data wasn't corrupted. They're also well-suited to filter specific sets of data. Over time, teams realized that all of the data they collected could be pretty valuable. By analyzing it, they might be able to answer questions critical to their business. How often did users log in? Where were they based? What was the most popular pricing plan?
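To make this concrete, here is a small, self-contained sketch using Python's built-in sqlite3 module; the table and column names are invented for illustration, and a warehouse like Snowflake is built for the second, analytical style of query at vastly larger scale.

```python
# Tiny transactional writes and point reads vs. a heavier, whole-table analytical query.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT, plan TEXT, country TEXT)")

# Transactional work: many small writes and lookups as users take actions.
db.execute("INSERT INTO users (email, plan, country) VALUES (?, ?, ?)",
           ("a@example.com", "pro", "US"))                          # a user signs up
db.execute("UPDATE users SET plan = ? WHERE id = ?", ("team", 1))   # a user changes their plan
row = db.execute("SELECT email, plan FROM users WHERE id = ?", (1,)).fetchone()

# Analytical work: one query that scans everything to answer a business question.
popular = db.execute("""
    SELECT plan, COUNT(*) AS signups
    FROM users
    GROUP BY plan
    ORDER BY signups DESC
""").fetchall()
print(row, popular)
```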
The problem was that the kinds of queries required to unearth this information are markedly different from those used in transactional databases. They’re typically longer, more resource-intensive, and involve combining data sources from multiple places. Transactional databases weren’t designed to handle them. Enter the data warehouse. To better serve these types of queries, teams began designing databases specifically for analytics, copying data from their transactional database into this new analytical database. These new structures were optimized for gnarly, analytical queries, as was the underlying infrastructure (servers). Thanks to their more fine-toothed functionality, teams have ended up storing orders of magnitude more data in analytical warehouses, tracking events like website page views and product interactions. This is what Snowflake is: an analytical database. More precisely, it’s an analytical database that requires no infrastructure management from its users, often referred to as a “managed data warehouse.” Users don’t need to deal with deploying servers, scaling, networking, or managing permissions — they just dump data into Snowflake and query it (extremely) quickly. Snowflake’s even built an in-product query engine to accelerate queries through low-level optimizations. The Product Snowflake isn’t the only managed data warehouse on the market — Amazon Web Service’s (AWS) Redshift and Google Cloud Platform’s (GCP) BigQuery are popular options. What is it that has made Snowflake such a success? Above all, it’s a combination of flexibility, service, and UI. Splitting storage and compute It’s worth noting that with a database like Snowflake, two pieces of infrastructure keep the lights on: storage and compute. Snowflake takes care of efficiently storing your petabytes of data and making sure your queries run quickly. This idea — separating storage and compute in a data warehouse — was relatively novel when Snowflake started. Today, entire query engines like Presto exist just to run queries efficiently, with no storage included. There are some clear advantages to splitting storage and compute: stored data is located remotely on the cloud, saving local resources for the compute load. Moving storage to the cloud delivers lower cost, has higher availability, and provides greater scalability. No vendor lock-in CIOs — and increasingly, developers — don’t want to be tied to a specific cloud provider. The majority of enterprises have adopted a multi-cloud approach (and that can mean many things). As such, there’s a natural reluctance to choose solutions like BigQuery that are native to a single cloud (Google, in this case). Snowflake provides a different level of flexibility. It runs on AWS, Azure, or GCP, satisfying the “multi-cloud checkbox” that enterprise buyers have. Often such a strategy is theoretical rather than concrete. Amidst tech’s giants battling for control of the cloud, Snowflake is a sort of warehousing Switzerland. Of course, this isn’t to say that there’s no lock-in. By signing up to Snowflake, users are committed to a vendor: Snowflake, itself. Fully managed If you want to build a data warehouse from scratch, there’s a lot of infrastructure to manage. Even if you’re outsourcing your servers to a cloud provider, you still need to deal with sizing the right instance (4GB RAM? 48GB RAM?), scaling as your workloads grow, and networking components together. If you use a managed service like AWS’s Redshift, you need to worry about handling sizing and scaling. 
As mentioned above, Snowflake is a fully managed service — users don't need to worry about any infrastructure at all. You just put your data into the system and query it. While fully managed services sound great, no good deed goes unpunished, which means there's a tradeoff: price. Snowflake customers need to be intentional about storing and querying their data because fully managed services are expensive. When teams decide whether to build or buy, they need to forecast cost savings by comparing the total cost of Snowflake ownership to building something themselves.

Impressive UI and SQL ergonomics

Snowflake's UI for querying and exploring tables is surprisingly pleasant to look at and intuitive to use. Snowflake's product

Another nice touch from Snowflake is their SQL ergonomics. SQL (Structured Query Language) is the programming language that developers and data scientists use to query their databases. Each database (PostgreSQL, MySQL, etc.) has its own SQL flavor, with slightly different details, wording, and structure. Snowflake's SQL seems to have gathered the best from many SQL dialects, adding useful functions. A great example is the PIVOT function, which regular SQL tends not to have but which saves analysts a bunch of time (if you've ever tried to create a pivot table from scratch, you'll understand).

Data sharing

In a bid to establish its network effect credentials, Snowflake outlines its "data sharing" functionality in the S-1. As it stands, this appears to be a relatively immature part of Snowflake's product, though it does seem to have had some use during the COVID outbreak. The S-1 notes that one client shared epidemiological data via the company's "Data Marketplace," subsequently viewed by "hundreds" of customers. A representation of this interaction can be found below. Snowflake's S-1 filing

Business Model

"I…[do] not like SaaS that much as a business model, felt it not equitable for customers," Slootman wrote in 2019, discussing his decision to join Snowflake. The concept of equity and alignment is a critical part of the company's business model. Rather than charging per seat on a monthly or annual basis, Snowflake bills customers based on usage, offering on-demand and upfront billing. Discounts are given for the latter. Furthermore, Snowflake bills its three core functions — storage, compute, and cloud services — separately. Compute is the larger financial commitment. For example, purchasing 4TB of data storage per year and 955 compute credits costs roughly $23K per year. Approximately $22K would be spent on compute, with just $1K allocated to storage. Public Comps, Snowflake

This model is suited to the large enterprise contracts on which Snowflake focuses. The company boasts recognizable logos across sectors, including media, financial services, healthcare, manufacturing, and technology. Sample customers include sleepier businesses like Office Depot, McKesson, and Nielsen, alongside startups DoorDash, Instacart, and Rent the Runway. Snowflake's S-1 filing

Snowflake primarily leverages a top-down sales motion to secure these deals, relying on robust sales development, inside sales, and other related teams. Indeed, of all public SaaS companies, Snowflake spends the most on Sales & Marketing as a percentage of revenue, at 70%. As a comparison, Zoom spends just 37%. Capital One is a particularly crucial relationship. In 2017, Snowflake signed the financial giant and has successfully deepened the relationship since.
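Circling back to the "connect and query" model and the PIVOT function mentioned in the product section above, here is a hedged sketch using the snowflake-connector-python package; the account identifier, credentials, warehouse, and table are placeholders invented for illustration rather than anything from the S-1.

```python
import snowflake.connector

# Placeholder connection details; real values would come from your own Snowflake account.
conn = snowflake.connector.connect(
    account="xy12345.us-east-1",
    user="ANALYST",
    password="********",
    warehouse="ANALYTICS_WH",  # compute, billed separately from storage
    database="PRODUCT",
    schema="PUBLIC",
)

# PIVOT turns monthly rows into one column per month, sparing analysts a pile of
# CASE expressions in dialects that lack it. The table and columns are hypothetical.
sql = """
SELECT *
FROM (SELECT plan, TO_CHAR(signed_up_at, 'YYYY-MM') AS month, amount FROM signups)
PIVOT (SUM(amount) FOR month IN ('2020-01', '2020-02', '2020-03'))
ORDER BY plan
"""
for row in conn.cursor().execute(sql):
    print(row)
conn.close()
```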
For the fiscal year ending January 31, 2020, Capital One represented 11% of Snowflake's revenue — both a feat and a concern. That's significant customer concentration, a worry slightly mitigated when looking at the rest of Snowflake's customer base: the company has 3,117 customers. Of the Fortune 500, Snowflake counts 146 as clients, and they contribute 26% of revenue. Reasonable enough. The signing of Capital One is also a sign of the trust Snowflake has built. The company has earned HIPAA and FedRAMP compliance certifications, improving security. That's important for all companies, but particularly those that handle sensitive information. The fact that Snowflake has signed Capital One, Experian, and DocuSign is indicative of the reliable reputation they have earned in the market.

Management team

As discussed in "Snowflake's History," co-founders Dageville, Cruanes, and Zukowski are all still with the company. But while Snowflake's success began with the three co-founders' vision and technical expertise, Snowflake has blossomed under the leadership of external CEOs, chiefly Bob Muglia (2014–2019) and Frank Slootman (2019–present). Slootman most recently served as CEO of ServiceNow from 2011 to 2017 and is an operations guru known for being something of an IPO specialist after ushering ServiceNow, Data Domain, and now Snowflake into the public markets. That's been achieved by focusing on boosting sales and trimming corporate expenditures. Slootman's track record at his previous postings is impeccable. ServiceNow grew from $100M to $1.4B in revenue during his tenure, while Data Domain was guided to a $2.4B acquisition by EMC.

From an incentive perspective, executive compensation is mostly cash salary, with both an equity and non-equity component primarily determined by the compensation committee. Specific KPIs are largely lacking: Snowflake's S-1 filing

It's interesting to see that Slootman (5.9%) holds more of the company than founder Dageville (3.9%). Along with Muglia (3.3%), those three men make up the largest affiliated equity holders. SHV leads external investors with 20.3%. Snowflake's S-1 filing

Investors

When unpacking Snowflake's investor base, there's only one place to begin: SHV, venture capital's silent assassins. It's hard to find much information about the firm, despite having been in business since 1962. When you navigate to Sutter's website, you're met by a wall of blue, a logo, and an address. That's it. In a funding environment in which firms pay escalating amounts to brand themselves and establish an online presence, SHV's minimalism is a consummate flex. (See also: Benchmark). Sutter Hill's website

Whatever game the market is playing, SHV isn't bothered. They know what they're doing, and firms are unlikely to attain such staying power without success. According to Crunchbase, the firm has made 287 investments with 81 exits along the way. That includes familiar names like Sumo Logic (another impending IPO), GlassDoor, Smartsheet, and Yext. But perhaps more interesting than SHV's portfolio is their approach. While we've seen new venture funds come to market in the last couple of years outlining a "studio" approach, SHV has played the game for at least a decade. In an interview with Strictly VC, Managing Partner Sam Pullara describes the firm's practice of "origination" — building companies in-house with external entrepreneurs. This is a strategy that runs counter to the narrative that funds can't incubate enduring businesses.
“The best example of what we do is Pure Storage, which my [Sutter Hill] partner Mike Speiser helped start when he was leaving Yahoo to come here. He worked with [founder and CTO] John Colgrove for eight months, trying to figure out the best company to start in flash storage. John started it, and Mike joined as interim CEO as John built out the team. Then Aneel Bhusri of Greylock [Partners] did the Series B, and the rest of that is going amazing.” Though details are scant, Snowflake seems to have been built with the same approach. Indeed, it’s hard to overstate just how involved SHV appears to have been in its success. From what we understand, Speiser helped the original architects design the commercial organization around the product, with Snowflake’s sales and go-to-market teams running through SHV until recently. That hard work has paid off. At IPO, SHV is the largest external shareholder with a 20.3% stake in Snowflake. They’re followed by Altimeter Partners (14.8%), ICONIQ Partners (13.8%), Redpoint Ventures (9.0%), and, inevitably, Sequoia Capital (8.4%). As mentioned earlier, Snowflake has raised a total of $1.4B on its path to IPO, illustrating that while costs are relatively low to start software businesses, scaling is expensive. Critical rounds include the $5M Series A led by SHV in 2012, a $26M Series B round led by Redpoint in 2014, and the monster $450M Series F in late 2018, spearheaded by Sequoia. As part of that round, enterprise software veteran Carl Eschenbach — formerly of VMWare and involved with the firm’s investments in UiPath and Gong — took a board seat. There’s one final investor worth noting: Warren Buffett. The Oracle of Omaha is expected to plow $570M into Snowflake, investing $250M in a private placement at $80 per share. He’s agreed to purchase a further 4M shares from former CEO Muglia at the IPO price, which could cost between $303 and $344M. Salesforce Ventures is also grabbing $250M as part of the private placement. Given Berkshire Hathaway’s reticence to invest in tech companies, particularly those operating at a loss like Snowflake, this show of confidence represents something of a coup. Financial highlights Ask an investor to wave a wand and create a business, and it might end up looking a lot like Snowflake. There’s plenty to admire in the company’s performance to date. With that in mind, we’re going to focus on three fundamental pieces of the business: revenue growth, net revenue retention, and the margin profile. Revenue growth There are no two ways of putting it: Snowflake is growing fast. As noted under “Snowflake’s History,” revenue expanded 173% over the last two fiscal years and 132% when comparing the six months leading up to July 31, 2019, with the same period in 2020. Some sources have suggested that Snowflake’s trajectory makes them one of the fastest-growing software companies at IPO ever. This has been achieved at a significant scale. As noted earlier, Snowflake logged $264M in its 2020 fiscal year, up from $96.7M. Net revenue retention Customers are using Snowflake more and more. Thanks to the usage-based pricing model, Snowflake has had success “landing and expanding” with their clients. Their most recent six month period showed a net retention rate of 158%, meaning that even accounting for churn, customers spent 58% more than they had compared to the same period in 2019. Remarkably, Snowflake’s net revenue retention stood at 223% the year prior. 
This compares favorably to several public software companies, including Datadog (146%), Slack (138%), and Slootman's old haunt, ServiceNow (96%).

Moderate margins

If there's one weakness in Snowflake's financial performance beyond the decision to eschew profitability, it's the gross margin profile. Unlike many public software companies, Snowflake can't brag about 75–90% gross margins. Because of where the company sits in the stack, they're reliant on public cloud providers like AWS, Azure, and GCP. That's demonstrated graphically in the S-1 filing. Snowflake's S-1 filing

As a result, gross margins are comparatively modest, around 60%. This represents an improvement over prior years, which saw margins of 44–45%. Part of that is due to increased negotiating power with those public cloud providers. From the S-1: "Our improved gross margin during the last four quarters presented is primarily attributable to higher volume-based discounts for purchases of third-party cloud infrastructure, increased scale across our cloud infrastructure regions, and improved platform pricing discipline."

Valuation

What is Snowflake worth? This is the (multi)billion-dollar question for Slootman, SHV, and Buffett himself. As background, Snowflake last raised capital in February of this year at a $12.4B valuation. There are whispers $SNOW's debut may see the business priced north of $20B, with $30B within the realm of possibility given the scalding-hot IPO market. That's quite a price for a company yet to reach an annual revenue run rate of $1B. As referenced in "Market," Snowflake considers its TAM to be $81B, with additional upside possible should data sharing become a more significant part of its business. The S-1 notes that the "data sharing opportunity" has "not been defined or quantified by any research institutions" but considers it to be "substantial and largely untapped." Putting that to the side, should Snowflake IPO at a $25B valuation, the company would represent a whopping 30% of its TAM. It would also represent one of the industry's highest multiples relative to sales: 95x for the 2020 financial year and 52x for the six months ending July 31. Does Snowflake deserve this premium? Ultimately, the company is a best-in-class provider, growing quickly and operating in a rapidly expanding market. If we assume Snowflake's trajectory continues, with revenues reaching $850M by the 2022 financial year, the sales multiple looks more reasonable at 29x. That's in line with other public SaaS companies, including Datadog, Zoom, and Coupa, all of whom see Enterprise Value (EV) to Next Twelve Months (NTM) sales at roughly 30x. More broadly, high-growth SaaS sees EV/NTM sales between 20–25x.

Competition

The competitive landscape is one of the most exciting parts of the Snowflake story. As noted, the "frenemies" dynamic seems unique in the enterprise software landscape.

The Big Three and beyond

To put it simply: Snowflake relies on the big cloud providers (AWS, GCP, Azure) for raw materials. Snowflake's product, in effect, is storage and compute. Without the big cloud providers, they don't have storage. Access is mission-critical. Simultaneously, Amazon, Google, and Microsoft each have a directly competing product to what Snowflake offers. Redshift, BigQuery, and Synapse are going after the same customers that Snowflake is. Given those are three of the most successful, highly capitalized businesses of all time, that's cause for concern.
To date, Snowflake seems to have benefited by being the "Switzerland" of the space — staying neutral as public cloud providers engage in a slugging match. They've also capitalized on their first-mover advantage. But looking forward, it's not clear what barriers prohibit this "Big 3" from catching up. In addition to the big cloud providers' SQL databases, the OG on-prem SQL data players remain. Oracle, SAP, and Teradata are striving to compete in this brave new (cloud) world. Finally, it's worth considering a competitor of similar vintage to Snowflake. Databricks, founded in 2013, started as an extension of Apache Spark, initially focusing on the data science part of the analytics stack. Over time, they've encroached on Snowflake's domain, with their new Lakehouse product providing storage. While Snowflake's been squarely focused on storage (and compute) to date, the company has also suggested an interest in data science workflows. Databricks is a small company relative to the giants listed above, last valued at $6B. But five years down the line, we may see more robust competition as feature sets converge.

Tracking market share

Given the space's competitive dynamics, tracking Snowflake's market share becomes increasingly important for those interested. Though we've mentioned other KPIs above — notably revenue growth, margin profile, and net revenue retention, all relative to competitors — alternative metrics may provide a unique vantage. In particular, we think tracking "developer mindshare" may give a differentiated view on how Snowflake is trending relative to its rivals and is a potential leading indicator of future market share. Leveraging publicly available data on GitHub, we were able to identify the frequency and velocity of code and repositories on specific technologies. The results are as of August 2020, summarized in the tables below: Taken together, Snowflake shows steady growth in developer mindshare, though BigQuery is accelerating more rapidly. RedShift shows relatively similar growth, while Hadoop, Cloudera, Synapse, and Apache Hive are in relative decline. Those interested in the stock may want to consider tracking this information going forward.

The cautionary tale of Hadoop and Cloudera

While all looks rosy for Snowflake at the moment, observers of the space may recall the (relative) demise of similarly hyped businesses: Cloudera, MapR, and Hortonworks. Swift technological advancements have the potential to benefit insurgents and depose incumbents. After its release in 2006, Apache's Hadoop received significant buzz through the early 2010s. Hadoop was an open-source ecosystem, and for a period, the best data warehousing solution for on-prem databases. A trio of companies emerged off the back of Hadoop, serving as vendors. Cloudera, MapR, and Hortonworks helped companies leverage Hadoop's features to create high-volume data management strategies. That trio dominated the on-prem traditional data warehouse industry before they faced a classic Innovator's Dilemma. As they focused on servicing their existing market, they failed to devote time and resources to the next frontier: the cloud. Over time, all three lost share to Snowflake and the offerings of the Big 3. Unable to compete architecturally, the trio faced increasing pressure exacerbated by strategic mistakes. In 2019, Cloudera and Hortonworks merged in an all-stock deal.
Though Cloudera now trades on the NYSE, its $3.9B valuation is a fraction of what it might have been, given what Snowflake has managed to achieve. The same year as the Cloudera merger, Hewlett Packard acquired MapR for less than $50M, a far cry from the $280M raised, and believed to be below its revenue. Snowflake was the beneficiary of the evolution of data warehousing towards the cloud, but like all things tech-oriented, it is susceptible to disintermediation via innovation. Today, no threat appears imminent. But mindful investors will remember that the backers of Cloudera, Hortonworks, and MapR may have once felt the same.

Why now?

There's a rumor that circulated in May of last year. In an interview with The Information, then-CEO Bob Muglia made an innocuous comment. When asked when Snowflake would go public, he answered that it would be at least a year, maybe more. "We have enough cash and don't need to go public," he said. Two weeks later, he was ousted. The man brought in to replace him — Slootman, of course — was a different profile of leader. A disciplined, aggressive CEO with experience taking companies public, having done so twice already. Snowflake is his chance to "three-peat." All to say that Snowflake has been mulling a listing for some time, and put in place a leader well-suited to make it happen in relatively short order. In October 2019, just five months into his tenure, Slootman stated that he expected Snowflake to go public in early summer 2020 or after the presidential elections. Both the favorable market conditions and coronavirus tailwinds seem to have altered that calculation slightly. Rather than waiting for late November or December, Snowflake is capitalizing on the sympathetic market conditions to list this fall. While investors' appetite for cloud stocks is undoubtedly a large part of the calculus, there may be another element. As mentioned earlier, Snowflake has executed at breakneck speed. At the back of Slootman and Co.'s mind may be a question: can we keep it up? It's hard to find fault with Snowflake's metrics at the moment, but as competition heats up, that may not always be the case.

👉 Find more of this analysis at The Generalist.
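As a closing footnote, a quick back-of-the-envelope check of the EV/sales multiples quoted in the "Valuation" section above, using a hypothetical $25B valuation and the revenue figures cited in the piece:

```python
ev = 25e9                        # hypothetical $25B IPO valuation discussed above
fy2020_revenue = 264e6           # revenue for the fiscal year ending January 31, 2020
fy2022_revenue_assumed = 850e6   # the piece's assumed FY2022 revenue

print(f"EV / FY2020 sales: {ev / fy2020_revenue:.0f}x")                  # ~95x
print(f"EV / assumed FY2022 sales: {ev / fy2022_revenue_assumed:.0f}x")  # ~29x
```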
https://mariogabriele.medium.com/snowflakes-ipo-and-the-data-blizzard-27ece27c026f
['Mario Gabriele']
2020-09-14 17:08:14.619000+00:00
['Investing', 'Stock Market', 'Data', 'Warren Buffett', 'Technology']
502
My 3 Favourite Gadgets
My 3 Favourite Gadgets

Image by Pixabay

I'm not the sort of person who is into gadgets just for the sake of having the latest piece of digital trickery. I'm only interested in the ones that make my life better.

Number one has to be the rechargeable, portable modem my wife bought for me in Japan. It is smaller than a smartphone, which means you can take it anywhere you like and not have to worry about looking around for mobile hotspots or some coffee bar with wifi. Most of the time it just sits near my pc at home doing its job. It is a rental item, though I daresay you can actually buy them. And an added bonus is that if my wife and I decide to get on a plane and go somewhere for a holiday, we can just pop the modem into a small envelope and post it back to the rental company at any old post box. Likewise, if you are visiting Japan, you can rent a modem at the airport, just like you can rent a car, and hand it or post it back just before you leave. Supercool.

Number two has to be the white disc with a needle attached which I simply press into my arm and leave there to constantly monitor my blood sugar level. It comes complete with a handheld monitor which not only gives me an instant reading, but can also be plugged into a computer to download all the saved information onto a programme from which I can then print out readings to see if I am about to die anytime soon. No more drawing icky blood from my fingertips just to get an on-the-spot level. Brilliant.

Number three has to be my wife's Apple Mac. Quite frankly, I would be totally lost without it. Yes, it is getting old and is a little underpowered, for which reasons it can be somewhat recalcitrant to say the least. However, it has been an absolute lifesaver for me during these pandemic days. Online writing and editing, online teaching, online every blooming thing. And the thing I really like about the Apple is that, unlike all my previous PCs, it doesn't crash and burn or pick up any nasty viruses. Yes, mine does go into the odd nosedive or tailspin, but it always recovers before it hits the ground and kills me stone dead. I have to say a really big thank you to Mister Steve Jobs for gifting me and the rest of the world something that makes me as happy and as miserable as a man with a sexy, oftentimes stroppy, personal assistant. I am mindful of that song by the Smiths, 'Girlfriend in a Coma': "There are times when I could strangle her, but I'd hate anything to happen to her." Or to put it another way, it's a bit like a partner who you cannot live without nor bury under the patio in the back garden.

Tree Langdon, Dr Mehmet Yildiz (Editor of Technology Hits), Dr John Rose, Dr. Preeti Singh, John Ross, John Gruber, kurt gasbarra, Joe Luca, Terry Mansfield, Stuart Englander, Stuart Grant, Agnes Laurens, Geetika Sethi, Victoria Ichizli-Bartels, Ph.D., Rebecca Stevens A., Maria Rattray, Sabana Grande, Sushma Sampath, CR Mandler MAT, Ntathu Allen
https://medium.com/technology-hits/my-3-favourite-gadgets-cb1d90232a19
['Liam Ireland']
2020-12-16 09:30:33.666000+00:00
['Gadgets', 'Writing', 'Short Stories And Poems', 'Humour', 'Technology Hits']
503
7 Things Leading Enterprises Realize About The Data Tipping Point (That Might Help You)
Way, way back in 2000, Malcolm Gladwell defined the tipping point as a moment of critical mass when ideas and products and messages and behaviors suddenly spread like viruses. In his case, Hush Puppies and crime rates. Only 16 short years later, the Internet Of All Things (IOaT) and Cloud have already gone way beyond their own tipping points. We're now all decorated with sensors and devices on our bodies, in our homes and businesses. The amount of 'data-in-motion' from anything that can emit data is predicted to rise from 4 to 44 zettabytes in this decade alone. Everyone writes about the tsunami of things, but I predict the impact of Things and Cloud will be nothing compared to the tipping point for data. Especially the impact on the technology architectures of large enterprises who provide the services and products behind it all. In my position, I work with the world's leading users of data, helping them do the right things as they re-platform their data layers and architectures. Here's what they know about the tipping point for data that might help you.

They Design With Real-Time in Mind

We used to think that a megabyte was a lot of data to analyze. Now it's not unusual to think in terms of a terabyte or even a petabyte of raw data coming from an infinite variety of self-configured sources, boiled back down to megabytes or even a few kilobytes of intelligent analysis. It's a kind of reverse funnel. While this growth in the volume of data is important, the hardest problem hidden here for data scientists is the variety and complexity of new data types, then how to capture that data and realize its analytic value in real time. So volume, variety, and complexity in real time, from an infinite variety of things such as self-configured devices, infotainment, wearables, cybersecurity, sensors, vehicles, and machines, create the data tipping point for enterprises. It's this that is accelerating changes in the IT architectures of big business and government. What this means is that top data users now design their architecture around how to create data schemas in real time, and to be able to use that data itself as the roadmap to analytics that can differentiate their decisions and build the requirements and reports to run the business.
https://medium.com/future-of-data/7-things-leading-enterprises-realize-about-the-data-tipping-point-that-might-help-you-4b8297df71a4
['Scott Gnau']
2016-06-03 17:22:49.056000+00:00
['Big Data', 'Data Science', 'Technology']
504
I played Tic-tac-toe on a quantum computer… and won!
A game of "Quantum" Tic-tac-toe.

In 1950, Bertie the Brain delighted crowds at the Canadian National Exhibition. It was the first known computer game with a visual display. It was a custom-made electronic version of Tic-tac-toe. In 1952, Alexander Douglas created the first video game, called OXO, which simulated Tic-tac-toe on a general-purpose computer. It wasn't likely that anyone outside of Cambridge played it. Ten years later, programmers at MIT created Spacewar!, which was installed on many PDP-1 computers, including the one pictured here that I played with my daughter at the old STARTUP computer history exhibit in Albuquerque. Playing Spacewar! in 2012.

Now, in 2021, the students in my class Introduction to Quantum Computing each created their own version of "Quantum" Tic-tac-toe, which can be played on a quantum computer. (We had to keep the Tic-tac-toe legacy alive! But, also, creating games is the best way to learn computational concepts.) I created my own version of Quantum Tic-tac-toe alongside the students with the help of some UTS Software Engineering interns. My version is pretty simplistic, but it's meant to be modular so you can easily modify and test out new quantum moves. Each move adds a gate to the game circuit. The basic moves are as follows:
- Measure ends the round and executes the game circuit on the quantum device. The win conditions will be counted and displayed.
- Not flips an "owned" tile to the other player. If the tile is not currently owned, this does nothing.
- O turns the initial tile toward being owned by "O."
- X turns the initial tile toward being owned by "X."
- SWAP swaps the location of two tiles.
As you play, you will see the game board and the circuit. The board shows the sequence of moves but is only indicative. The game circuit is the true state of the game — however, you need to understand a bit about quantum computing to appreciate what's happening there. To play the game requires "executing" it on a quantum computer (or simulation). As of 1 Sep 2021, quantum computers are pretty small, and with public access today you can play 2 x 2 Tic-tac-toe. The result of the above game took 10 seconds to run on a quantum device. Details on the game that was played above and ran on an IBMQ quantum computer.

Okay, Tic-tac-toe, so what? Well, computer games have a long history of development alongside technology. As a teaching tool, games really are amazing. You can play Quantum Tic-tac-toe yourself on a quantum computer using the code below. You could even have a crack at modifying it to create your own set of rules. Enjoy!
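For readers who want a feel for how moves can map onto gates, here is a minimal, illustrative sketch of a 2 x 2 round in Qiskit. The specific gate choices (RY quarter-turns for the O and X moves, an X gate for Not, a SWAP gate for SWAP) and the reading of |0> as "O" and |1> as "X" are my own assumptions for illustration, not necessarily how the author's game is implemented; it also assumes a pre-1.0 Qiskit release where qiskit.Aer and execute are available.

```python
from math import pi
from qiskit import QuantumCircuit, Aer, execute  # assumes a pre-1.0 Qiskit release

board = QuantumCircuit(4, 4)  # one qubit per tile of a 2 x 2 board; read |0> as "O", |1> as "X"

def move_o(tile):    board.ry(-pi / 2, tile)   # nudge a tile toward "O"
def move_x(tile):    board.ry(+pi / 2, tile)   # nudge a tile toward "X"
def move_not(tile):  board.x(tile)             # flip an owned tile to the other player
def move_swap(a, b): board.swap(a, b)          # exchange the contents of two tiles

# A short round: X claims tile 0 (two nudges), O nudges tile 1, Not flips tile 1,
# then SWAP trades tiles 0 and 3.
move_x(0); move_x(0)
move_o(1); move_not(1)
move_swap(0, 3)

# "Measure" ends the round: execute the game circuit and tally the outcomes.
board.measure(range(4), range(4))
counts = execute(board, Aer.get_backend("qasm_simulator"), shots=1024).result().get_counts()
print(counts)  # bitstring frequencies decide who owns each tile
```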
https://medium.com/@csferrie/i-played-tic-tac-toe-on-a-quantum-computer-and-won-bbaca1c7bb82
['Chris Ferrie']
2021-09-01 03:54:14.952000+00:00
['Science', 'Computer Science', 'Quantum Computing', 'Technology', 'Games']
505
Deconstructing Theodore’s Manifesto (Part 2)
The section entitled "The Psychology of Modern Leftism" begins by affirming that "almost everyone will agree that we live in a deeply troubled society." I do not know about almost everyone, but I at least do agree with this statement. A society that venerates billionaires, desecrates culture, oppresses, spies on and controls its own people, pretends the homeless are non-entities, allows and consents to the abuse, neglect, rape and murder of children, destroys the natural world and treats non-human species as commodities is both deeply troubled and deranged.

The author describes the modern leftists as socialists, collectivists, politically correct types, feminists, gay and disability activists, animal rights activists and the like. Which brings to mind the almost unfathomable case of biology professor Bret Weinstein. Professor Weinstein's unpardonable sin, according to the modern leftist students rallying against him, was to stand for principles of equity when, in 2017, he refused to submit to the organizers of a Day of Absence in which white people would not be allowed into Evergreen State College in Olympia, Washington. There is a huge difference, wrote professor Weinstein, between a group or coalition deciding to voluntarily absent themselves from a shared space in order to highlight their vital and underappreciated roles and a group encouraging another group to go away. The first is a forceful call to consciousness, which is, of course, crippling to the logic of oppression. The second is a show of force, and an act of oppression in and of itself. On a college campus, he added, one's right to speak, or to be, must never be based on skin color. These statements provoked threats and protests against the professor, which he wrote about on Twitter as the events unfolded, as well as a strong demand from a group of students that he be removed from his position. The case ended in professor Weinstein's negotiated departure from the university. Facts and logic are no match against the imbecility of the masses.

Modern leftists have seemingly agreed during the last decade that the best way to fight capitalism, all they judge as inappropriate behavior, sexual harassment, global warming, discrimination, terrorism and injustice of all kinds is via posts on the social platform Twitter, or a similar medium, preceded by a hashtag (#), like #MeToo #BlackLivesMatter #OscarsSoWhite #ShoutYourAbortion (yes, this exists in the same realm as you and me) #PrayForParis #ProtectOurWinters and many more. This trend has even been named Hashtag Activism. Activists act. Spectators hide behind a hashtag and a screen. Mr. Kaczynski ends this brief section of his Manifesto by mentioning modern leftists' two major psychological tendencies: feelings of inferiority and oversocialization.
https://medium.com/@basteph/deconstructing-a-manifesto-part-2-2af678f9d63f
[]
2020-12-31 08:05:16.433000+00:00
['Activism', 'Technology', 'Society', 'Tech', 'Politics']
506
Why We Invested in Deliverr
Last month we announced our investment in Deliverr, the next-generation e-commerce fulfillment platform. Deliverr provides e-commerce marketplace sellers with a simple self-serve platform to optimally warehouse their inventory and ship it efficiently to their customers. Deliverr's core focus is to use advanced AI and analytics to give sellers Amazon-grade fulfillment technology, thereby reducing fulfillment costs and shipment times to 2 days (or less). Since this is such a big and important space, I wanted to share why we invested in Deliverr and why we think it's an important new platform.

Fulfillment is a large and growing industry in the United States. Companies spent $1.5 trillion in the US on shipping costs in 2017 alone. Parcel delivery, a component of fulfillment spending, is estimated to be $99B a year, and it is growing at 15% annually — a rate likely to be sustained over the next 2 decades. The global numbers are at least 3x these American figures. It's also very clear that except for Amazon, the fulfillment industry is broken. Sequestered data pools prevent most companies in this massive industry from making data-driven decisions. Decisions on where to put freight, which 3PL warehouses to use, and which carrier to select for a given order are made in isolation because each market segment is fragmented and no players have integrated across the various verticals. Obscure, localized decision-making inevitably makes supply chains inefficient. The fulfillment industry is broken with fragmented segments and no integration across verticals.

A good analogy is taxis before the advent of Uber. Each taxi company operated in a kind of informational vacuum. Consumers in San Francisco would wait a long time to get a taxi, but taxi drivers in Marin — just north of San Francisco — would sit idle trying to find a ride. Uber and Lyft orchestrated demand and supply so folks in both San Francisco and Marin could more quickly flag rides and travel to their destinations. Fulfillment today is like the taxi industry in the 2000s. Merchants, warehouses and carriers all suffer through grinding inefficiencies because no one is orchestrating demand and supply, and they ultimately transfer the costs of economic friction to end consumers in longer shipment times and higher fulfillment costs.

To date, Amazon is the only organization that has tackled this problem head on, and consequently Amazon has built the world's most sophisticated e-commerce fulfillment engine. All freight, 3PL, and shipping decisions employ end-to-end data analysis. First, freight is shipped in from overseas and is segmented to be placed at warehouses ("integrated") across the country. Next, warehouse inventory is optimized to store goods closest to the consumers who will order them, varying on the time of year, regional tastes, the local language, etc. Ultimately, these calculations reduce transit times and carrier costs. The cheapest and fastest carriers are selected to bring goods to your doorstep. The results speak for themselves: Amazon is growing at a frenetic pace while eBay is stalling. Consumers are feeling the difference. Amazon delivers a large chunk of products under 2 days while the rest of the world is still stuck at 3–5 days. Less than 5% of Walmart's products are enabled for 2-day shipping. Amazon packages travel a fraction of the miles to the end consumer from the warehouse compared to other marketplaces like eBay.
Amazon has over 2 million sellers who utilize its all-inclusive fulfillment services, while the rest of the world must coordinate piecemeal with trucks, warehouses, and carriers to hold their delivery times and costs down. Like most industries, the fulfillment industry needs to transform into a smart enterprise. The fragmented supply chain is begging for a player that can orchestrate logistics between all three segments to ensure network-level efficiencies are achieved. As venture investors, we look for large industries that need to be fixed, and it's rare to find such a large industry which lacks a fundamental coordination layer. New companies can often only be built when fundamental shifts occur to unearth new market opportunities. The growth of 3rd-party marketplaces and e-commerce has created a prime opportunity for a new player like Deliverr to provide top-tier logistics orchestration to the fulfillment industry. Deliverr's approach appeals to us for multiple reasons:

a. Data-driven: Harish has collected vast amounts of data, from transit times to demand maps. He and his team have analyzed millions of data points to formulate their thesis, and then interviewed over 70 potential customers with fully detailed interview notes before writing the first piece of code. We liked that a lot. It is obvious to us that the ethos of Deliverr is data, and the founding team wants to build things that are going to create value from day 1 for their customers. As founders of Palantir, we understand the value of a data-powered smart enterprise, which is exactly what is needed to repair this massively inefficient industry.

b. Revenue driver vs cost center: Deliverr's thesis is that fulfillment is a revenue driver, but it is rarely treated this way. They have built the entire software stack with this thesis in mind. Today merchants can enable their items on fast shipping programs with both Walmart Free 2 Day Shipping and eBay Guaranteed Delivery, which is helping double their sales.

c. Extreme transparency: Like most industries with a low net promoter score (NPS), fulfillment is plagued by poor transparency. Deliverr was built from day one to shed light for merchants. Whether it's fulfillment costs or real-time location data on shipments, Deliverr's data empowers merchants to run their businesses more efficiently.

d. Self-service: Interestingly, Deliverr's entire platform is "self-serve". I watched them analyze their pilot users' full stories to determine where users became confused, and make the user interface completely intuitive. They took it upon themselves to make one of the hardest processes (sending physical products) accessible to any merchant, and today a merchant can sign up in a few minutes without talking to anyone. Deliverr's team constantly obsesses over every single frustrated user session to simplify the complex process of coordinating fulfillment. This big investment will pay off and allow Deliverr to scale easily.

As venture investors, we'd rather back entrepreneurs who think at scale. We only win when the companies we back are genuinely revolutionary. Deliverr's data-centric approach and their focus on user experience convinced us that they will be able to fix the entire fulfillment industry. As market dynamics shift, a good team will rapidly iterate on various solutions to the problems at hand. The team of seasoned engineers, data scientists, and fulfillment operations professionals that Harish and Michael have assembled could not be better suited to this massive challenge.
With hires from Amazon, Google, and other top companies, Deliverr's team has both unique hands-on e-commerce fulfillment experience and experience building large-scale software platforms. Harish's modus operandi is to hire smart people with high self-esteem and low egos. We are impressed with how cohesive Harish's team is and with the high degree of cooperation they exhibit. We are very excited to partner with Deliverr to fix the fulfillment industry. Alex Kolicich Partner, 8VC
https://medium.com/8vc-news/why-we-invested-in-deliverr-198876dbe35b
[]
2018-12-03 18:38:15.233000+00:00
['Logistics', 'Blog', 'Venture Capital', 'Technology', 'Fulfillment']
507
How to Reduce Your Carbon Footprint When Shopping Online
We all like to buy stuff online, but cross-country journeys, bulky packaging, and multiple deliveries per week take a toll on the environment. Here’s how to reduce your carbon footprint when online shopping. By Jason Cohen With in-store shopping curtailed due to the pandemic, online shopping exploded over Black Friday and Cyber Monday this year. Online Black Friday shoppers topped 100 million for the first time, an 8% increase from 2019, while Saturday shopping jumped 17% year over year, according to the National Retail Federation. But while online shopping is convenient, it requires an army of warehouses, planes, delivery trucks, and packaging that can wreak havoc on the environment. Amazon, for example, says its total carbon footprint increased by 15% in 2019, thanks to a combination of fuel emissions, electricity usage, and manufacturing. And while the company aims to reach net zero carbon by 2040, we don’t have to wait two decades to make a difference. When you’re shopping online this holiday season, here are some ways you can reduce your carbon footprint. Repair Over Replace We all like to have the latest and greatest tech in our lives, but before you click “buy,” consider whether you can repair your existing phone, tablet, or computer before upgrading. In 2007, the Information and Communication Industry made up 1% of the planet’s total carbon footprint, but that tripled by 2018 and is on pace to reach 14% by 2040, Fast Company reports. Most of those emissions come from mining the rare materials that go into making these devices—and the bigger the phone, the more CO2 is produced during the manufacturing process. So instead of contributing to the problem, see whether you can salvage your device. Companies such as Apple make it hard to get your device fixed, because they want you to buy a new phone or laptop instead, but there are options when it comes to iPhone repairs. Best Buy can also now fix your Apple devices at many of its locations around the country. Sometimes a laptop can appear to be on its last legs, but a quick virus scan or the right utility software can breathe new life into your device. We have how-to guides on a variety of gadgets, but iFixit is another good resource for DIY repairs. Buy Refurbished If your current device is beyond repair, consider replacing it with a refurbished device. In some cases, these are devices that were unboxed but not used and then returned to the manufacturer. Others were returned and needed repairs or upgrades before they were put back on sale as refurbished products at a reduced price. Apple, for example, sells refurbished versions of its iPhones, Macs, Apple Watches, and more, often for more than $100 off their initial prices. Amazon Second Choice sells pre-owned and refurbished items, while eBay is now selling certified refurbished products that you can buy directly from the brand. There is a lot to consider when it comes to buying refurbished; our guide to buying pre-owned tech will tell you what you should and shouldn’t buy used, where you can shop online, and more. Avoid Same-Day Delivery How many of us have placed an Amazon order for three or four items only to have them split up into separate boxes and delivery times? 
It may be convenient to get items as fast as possible, but when you’re ordering something that can wait a few days, Prime members can set up an Amazon Day, where all orders placed during the week will be delivered on a specific day—say, Saturday or Wednesday—to cut down on the number of trips a delivery truck makes to your house. You can set up an Amazon Day here or during the checkout process. You can also select longer delivery times when placing an order at Amazon (versus two-day or same-day Prime Now delivery) or retailers like Target or Best Buy. Amazon will sometimes throw in a digital thank you for slower delivery, like a Prime Video or Amazon Music credit. Choose Eco-Friendly Packaging Ordering online means having to deal with packaging materials, including cardboard, bubble wrap, plastic wrap, and packing peanuts. This leads to about 165 billion cardboard packages a year, which equates to around 1 billion trees. Shipping chilled foods also leads to 192,000 tons of waste each year just from freezer packs alone. Here are a few things you can do to reduce the waste. Aside from setting up an Amazon Day to bulk ship your Amazon orders once a week, Amazon also allows you to choose shipping options that create fewer total packages. When reviewing shipping options for your purchases, Amazon will note when a specific date will allow your order to arrive in fewer boxes. You can also buy products that use Amazon’s Frustration-Free Packaging program, which promises less packaging and recyclable materials while still protecting your purchases. Browse products that offer eco-friendly packaging here. Many meal kit delivery services use an excessive amount of packaging that can be difficult to properly discard. In our experience, EveryPlate limits the amount of total packaging, while Daily Harvest uses compostable materials. Many other retailers in various spaces are also beginning to offer sustainable packaging. It’s worth doing research to find the products that both meet your needs and are better for the planet. Stick to Local Shipping When shopping online, the distance between a seller and buyer is an important consideration. Packages shipped across the country will have a larger carbon footprint than ones delivered from a local store because the former requires more time on the road, in the air, or on a boat. When buying from an Amazon seller, you can check their official address by clicking the seller name to see where the business is located. If they are shipping the items themselves, knowing where it’s coming from can help you decide who to order from. However, many sellers also use Amazon’s fulfillment centers, which may be further away than the listed address. EBay lists where the item ships from, while Etsy allows you to specify geographic limits; select United States to weed out international shipping or enter a custom location. Shop Carbon Neutral Perhaps the best way to reduce your carbon footprint when shopping online is to simply shop at carbon-neutral businesses. Of course, this is easier said than done, because it’s such a new initiative for so many companies. If you’re looking to buy from a company that is committed to removing the same amount of carbon dioxide in the atmosphere that it is emitting, you will have to do some research. Forbes has a 2019 list of businesses that have made a push toward reducing their carbon footprint, which includes retailers Walmart, Target, Home Depot, Nike, Ikea, and more. 
For something a little more official, Climate Neutral is a nonprofit that verifies sustainability claims made by businesses, in order to do all the research on behalf of the consumer. Nearly 150 brands have been certified, and many more are committed and currently under review. Recycle the Packaging Ordering online means having to deal with all the packaging that comes with it. What are you supposed to do with it all? Amazon alone has paper mailers, plastic bags, cardboard boxes, insulation pouches, and so much more. Mashable provides a helpful breakdown of each type of packaging and how you can properly dispose of it. Be warned, though, because some of these materials can’t actually be recycled, and many others depend on policies and practices of your local recycling services.
https://medium.com/pcmag-access/how-to-reduce-your-carbon-footprint-when-shopping-online-703e60b8d61b
[]
2020-12-04 20:02:58.444000+00:00
['Carbon Footprint', 'Online Shopping', 'Green', 'Technology', 'Amazon']
508
The Beginners’ Guide to Automating Websites and Tests With Selenium
So What Do We Do With This Newfound Knowledge? With the basics laid out, let me show you some examples of how to apply Selenium in the real world to save yourself time and effort. Automate your posting queue Let's say you have a Tumblr blog about, I don't know, books spread open in nature, maybe. I think that's what people use Tumblr for ever since the big ban in 2018. Let's also assume you have a curation blog that reblogs the best content from a hundred other blogs — a couple of posts each day. You spent a long time collecting 1,000 good posts, but Tumblr being Tumblr, you can only queue 200 of them at the same time. So instead you used something like Session Buddy to collect all the URLs in an efficient manner and saved them to a spreadsheet. I may or may not have a couple of decaying spreadsheets that were decidedly not about books, so now they'd be completely useless, but I digress. Tumblr got sold for $3 million — that's eased my pain. So with this long list of great, earth-shaking posts, you could now write a little Selenium script that logs into Tumblr, goes through 200 posts at a time, and adds them to the queue. You could even write randomized captions and experiment with Spintax to make it all look natural — not that anybody would do something like that, not with books. You could then even put this script on a scheduled task so it runs once per month and loops back to the start once it's done. No one will even notice that you're recycling posts when your list is 1,000 posts long and lasts three months before starting over. Test, test, test Besides task automation, you'll most likely be using Selenium for automated testing of anything that faces the interwebs. Here are some things you could easily set up to automate testing: Log in to your customer dashboard automatically every couple of minutes to check that everything is working, and send an email or raise an alert if that fails. Download a file to ensure that the database is online and all connections run smoothly. Check that these tasks don't take considerably longer than they usually do, and raise an alert if connection speeds are slow. A minimal sketch of the login check appears at the end of this article. Automated reporting The last important thing Selenium can do for you is automated report generation. I already touched on this in the testing portion with the alerts for slow or nonworking connections, but let's assume you were running a bunch of different ads on your Tumblr blogs about books and also had some affiliate links, and now you might want to get a daily report of which of these avenues performed best. You could manually log into all of those different websites, copy the daily sums, and transfer them into a spreadsheet — but why not automate that task? Or you can get yourself a daily digest of news headlines or stock prices. Other people pay money for those things. In fact, why not set up a paid service that supplies those values to hundreds of people who, in turn, happily give you their money for saving them hours of time and giving them an edge over the competition?
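To make the login-check idea concrete, here is a minimal, illustrative sketch using R's RSelenium package. The URL, CSS selectors, and credentials are placeholders invented for the example, and the script assumes a working local Selenium/driver setup; treat it as a starting point rather than a drop-in monitor.

library(RSelenium)

# Start a browser session (assumes a local driver/Selenium server can be launched).
driver <- rsDriver(browser = "firefox", port = 4545L, verbose = FALSE)
remDr <- driver$client

# Log in to a hypothetical customer dashboard.
remDr$navigate("https://example.com/login")
remDr$findElement("css selector", "#email")$sendKeysToElement(list("monitor@example.com"))
remDr$findElement("css selector", "#password")$sendKeysToElement(list("not-a-real-password"))
remDr$findElement("css selector", "button[type='submit']")$clickElement()

# Crude check: did we land on the dashboard? Raise an alert if not.
Sys.sleep(3)  # a real script would poll for a known element instead of sleeping
page_title <- unlist(remDr$getTitle())
if (!grepl("Dashboard", page_title)) {
  warning("Login check failed, page title was: ", page_title)  # hook your email/alert here
}

remDr$close()
driver$server$stop()

Put on a scheduled task, the same skeleton covers the posting-queue and reporting ideas above: navigate, find elements, click or read values, and write the results somewhere useful.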
https://medium.com/better-programming/the-beginners-guide-to-automating-websites-and-tests-with-selenium-ac48b2fd33ec
[]
2020-07-19 10:22:35.936000+00:00
['Testing', 'Tdd', 'Front End Development', 'Programming', 'Technology']
509
How to Trace Your Containers Using End-to-End Supply Chain Management?
Tracing Your Cargo Containers Amidst COVID-19 Chaos With 2020 coming to an end, a year of the COVID-19 pandemic has made it evident that supply chains can no longer match the expectations of buyers and sellers when it comes to detailed tracking of their cargo. The problem goes beyond just tracking the cargo, as it also entails knowing the current location and condition of the goods. Due to extreme shifts in shipping sailings, reliable container tracking information has become almost impossible to obtain, which highlights the sudden need for end-to-end supply chain management software. The pandemic of 2020 didn't just highlight the weaknesses of current supply chain networks, but also provided us with the path towards being future-proof. All answers point towards one goal: interconnectivity between supply chain links. It may appear as if the problem so far has been a lack of technological integration, but that is far from the truth. The problem, as well as the solution, lies on the business side. Without incentivizing a shared economy and a collaborative ecosystem, individual entities will always resist inclusion. Now how does this relate to the topic of discussion, the inability to track your containers, and how can we work around this challenge? The solution is to normalize and standardize intermodal data communications between the various stakeholders of a supply chain network, so traceability of a cargo container is no longer reliant on one source of data. Seanode consolidates container-related event alerts from various sources to ensure 99.99% reliability, even during a period of crisis. Before we dive deeper into data consolidation or interconnected supply chain networks, it is imperative to cover what all of this entails: end-to-end supply chains typically involve several modes of transportation and a wide range of network stakeholders. The origin seller, as well as the end consumer of the goods, has little visibility into the location and condition of the product during the transportation phase(s). Each phase has its own method of standardization, which makes it nearly impossible for the end nodes of a supply chain to have visibility into the movement of goods. Identifying Common Data Points Due to the usage of different global standards for identification and tracking between the various transportation modes, it is often difficult to transmit all the information to the relevant stakeholders in an intermodal transaction. Visibility of the shipment may be lost to parties other than the current transport operator until the shipment arrives at the destination of that transport mode. Buyers and sellers may find this highly inconvenient, as they prefer to track their shipments using their own shipment identifiers end-to-end. At the same time, neither party has the power to standardize the data points required for global track and trace. Cargo Container Tracking While Seanode as a non-regulatory body cannot impose a global set of data standards, it can, however, use its AI capabilities to integrate with other data, logistics and terminal providers. This allows technical consolidation of data from various sources, returning all container-related tracking events to buyers and suppliers. Supply Chain Management — "Network of Networks" As supply-chain links become more and more connected, the flow of information between relevant stakeholders becomes fluid. This is the essential building block towards gaining both transparency and traceability into your supply chain. 
Using Seanode, you and your consumers can track the original source of the materials, how the goods were made, and how they were shipped to their country. This last element is not only important to maximize profitability but is also a major step towards the ethical sourcing of goods. Geofencing and Satellite Tracking The key to building trust and protecting the reputation of an entity is knowing the source of all the parts that make up a product. Leveraging the technology behind digital supply chains, organizations can improve supply-chain visibility by tracking both the movement and condition of shipments. IoT sensors and AIS technology measure the temperature of frozen or perishable goods, shock levels as fragile goods are moved, and the location of expensive items via the global positioning system (GPS). In doing so, shippers can help to guard against spoilage, damage and theft. Having a geofence around all major operational ports in the world means 24/7 visibility of each and every vessel in the world; a simplified sketch of this kind of geofence check appears at the end of this piece. Are you also struggling to track and trace your cargo containers? Let Seanode take care of it. Learn how Seanode's supply chain management platform is contributing to geofencing technology and tech enablement, so sellers and buyers are empowered to navigate through a crisis. You can reach our support team via [email protected]
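To make the geofencing idea concrete, here is a minimal, illustrative sketch in base R. The port coordinates and the 10 km radius are assumptions made up for the example and say nothing about Seanode's actual implementation; it simply flags when a vessel's reported GPS position falls inside a circular geofence around a port.

# Great-circle distance between two GPS points (haversine formula), in km.
haversine_km <- function(lat1, lon1, lat2, lon2) {
  to_rad <- pi / 180
  dlat <- (lat2 - lat1) * to_rad
  dlon <- (lon2 - lon1) * to_rad
  a <- sin(dlat / 2)^2 + cos(lat1 * to_rad) * cos(lat2 * to_rad) * sin(dlon / 2)^2
  6371 * 2 * asin(sqrt(a))  # mean Earth radius of ~6371 km
}

# TRUE if the vessel is inside a circular geofence around a port.
inside_geofence <- function(vessel_lat, vessel_lon, port_lat, port_lon, radius_km = 10) {
  haversine_km(vessel_lat, vessel_lon, port_lat, port_lon) <= radius_km
}

# Example with approximate coordinates for the Port of Rotterdam area.
inside_geofence(51.95, 4.05, 51.95, 4.14)  # TRUE, so an "entered port geofence" event could be raised

In a real pipeline, a check like this would run against each AIS/GPS position report and emit an arrival or departure event whenever the TRUE/FALSE status changes.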
https://medium.com/@m-ali-seanode/how-to-trace-your-containers-using-end-to-end-supply-chain-management-a06fcaa20e8a
['Munib Ali']
2020-12-10 11:16:28.706000+00:00
['Container Tracking', 'Containers', 'Shipping', 'Covid 19', 'Technology']
510
I Miss Running Out of Games to Play
I Miss Running Out of Games to Play Sixty dollars for a single game. That’s how much I could afford at my yearly pilgrimage to a gaming store back when the Xbox 360 and PS3 reigned supreme. Random bargain bin burrowing aside, the games on my shelf were sparse but my friends had it worse. Pocket money concerns meant that a game could cost them a meal when we were hanging out. Deciding on who bought which game was a recurring pastime in high school. Game swaps were our only hope. It helped us fill those annual Assassin’s Creed and Modern Warfare-sized holes in our game libraries without costing us our lunches. Game Boy cartridges and used game discs were the bloodlines of an avid gaming community of kids whose toughest challenge was getting the handheld gaming device in the first place. It might sound like a cause for pity, but it wasn’t. Far from it. We were the generation that dealt in dusty cartridges and scratched discs. The giddy joy on the ride back home after a game purchase is a fond memory that I can only cherish now. My mind would tunnel vision into one single thought: “We’re almost home.” I didn’t have to worry about day-one patches or 100GB-plus downloads. Dial-up connections would have killed our escapist dreams well before game prices could. Reload your Game Boy with a snazzy cartridge and you were good to go. All that stood between you and a portal from reality was a loading screen. And sure, the new PS5 and Xbox Series X|S consoles eliminate loading screens. But all they did was replace them with lengthy downloads. Preloading titles takes some of the hurt away but if you’re strapped for cash, you aren’t going to preorder $60 AAA games. And don’t get me started on season passes. Some physical copies now come with game codes, rendering second-hand discs obsolete. And services like EA Play and Xbox Game Pass mean that you might not even own the games you’ve sacrificed your pocket money for. I won’t deny that they’re lighter on the wallet. But these advances have come at a cost: the value of the games themselves. Who said you can’t experiment on a budget? Source: Image captured by Antony Terence. Modern values through the looking glass When one buys a game for $60, the decision comes with a fair bit of reflection. And while the worth of a title is certainly a subjective topic, it is still something an individual can compare on a per-game basis. I remember picking Halo: Reach because it had everything a home of two kids needed: a grim co-op narrative alongside some solid multiplayer options. While being on a limited budget did prove to be a barrier for my unquenchable gaming thirst, it made me value what I had. I left no virtual stone unturned in getting the most out of the games I had. Yes, even bargain bin games that critics threw under the bus. I even grew to love titles that barely command respect amidst blockbuster AAA titles. I’ve lost count of how many times I’ve replayed the campaigns of the games I own, reinforcing memories that remain imprinted in the recesses of my conscience. On my Game Boy, obscure game bundles were a chance to get more out of a bargain bin purchase. Sure, some of them would be bizarre and/or buggy takes on established franchises but they all kindled a sense of discovery in me. I miss experimenting solely because it isn’t the leap of faith it used to be. Bargain bin deals take on a whole new meaning when games drop to sub-dollar prices on Steam. It’s no surprise since they cut costs from distribution and retailer margins. 
And while indie games live or die by digital sales, spending $4 on creating a physical disc isn't that bad of a deal. Most Steam users now have a library that they can't possibly complete over the course of their lifetimes. But that doesn't stop them from hunting for more. Few game enthusiasts actually attempt to finish all the games they purchase. And fewer still lose any sleep over it. Image by Todd Zlab on Dribbble. Truth be told, gaming has become affordable thanks to digital sales and subscriptions. Things might have been different if the gamers of the good ol' days had access to these purchase options (and decent internet). Those then-cash-strapped friends of mine would certainly have laid their hands on some fine titles and then some, minus the skipped lunches. But I wonder whether I could have appreciated the games as much as those that formed my childhood. Going to a store and diving headlong into the bargain bin were adventures in themselves. And dropping by a friend's place to swap games is a pit stop I wouldn't want to miss.
https://medium.com/super-jump/i-miss-running-out-of-games-to-play-5e76d86d0ed4
['Antony Terence']
2020-12-13 09:25:55.879000+00:00
['Gadgets', 'Features', 'Gaming', 'Culture', 'Technology']
511
R - Statistical Programming Language
R - Statistical Programming Language Let's Not Make Coronavirus Stop Us From Learning A New Skill We should take advantage of this time to learn a new skill if we can. I have been learning the programming language R during the past few weeks. This article aims to provide an overview of the R programming language along with all of the main concepts which every data scientist must be familiar with. Motivation The field of data science and quantitative development requires us to constantly adapt and learn new skills because of its highly dynamic and demanding nature. There comes a time in a data scientist's professional life when it becomes important to learn more than one programming language. Consequently, I have chosen to learn R. This article will aim to outline all of the key areas and will explain everything from the basics. It assumes that the reader is not familiar with R or has only a beginner-level understanding of it. I highly recommend R for many reasons that I will highlight in this article. Photo by Cris DiNoto on Unsplash R is gaining popularity and it is one of the most popular programming languages. R was written for statisticians by statisticians. It integrates well with other programming languages such as C++, Java and SQL. Furthermore, R is mainly seen as a statistical programming language. As a result, a number of financial institutions and large quantitative organisations use the R programming language during their research and development. Python is a general-purpose language and R can be seen as an analytical programming language. 1. Article Aim This article will explain the following key areas about R: What Is R? How To Install R? Where To Code R? What Is An R Package and R Script? What Are The Different R Data Types? How To Declare Variables And Their Scope In R? How To Write Comments? What Are Vectors? What Are Matrices? What Are Lists? What Are Data Frames? Different Logical Operations In R Functions In R Loops In R Read And Write External Data In R Performing Statistical Calculations In R Plotting In R Object-Oriented Programming in R Famous R Libraries How To Install External R Libraries Let's Start… I will explain the programming language from the basics in a manner that should make the language easier to understand. Having said that, the key to advancing in programming is to always practice as much as possible. This article should form a solid foundation for the reader. 2. What Is R? R is a free programming language under the GNU license. In a nutshell, R is a statistical environment. R is mainly used for statistical computing. It offers a range of algorithms which are heavily used in the machine learning domain, such as time series analysis, classification, clustering, linear modeling, etc. R is also an environment that includes a suite of software packages that can be used for everything from numerical calculation to chart plotting to data manipulation. R is heavily used in statistical research projects. R is very similar to the S programming language. R compiles and runs on UNIX, Windows, macOS, FreeBSD and Linux platforms. R has a large number of data structures, operators and features. It offers everything from arrays and matrices to loops and recursion, along with integration with other programming languages such as C, C++, and Fortran. The C programming language can be used to update R objects directly. New R packages can be implemented to extend the R interpreter. 
R was inspired by S+, therefore if you are familiar with S then it will be a straightforward task to learn R. Benefits Of R: Along with the benefits listed above, R is: Straightforward to learn. A large number of packages are available for statistics, analytics and graphics, which are open-source and free. A wealth of academic papers with their implementations in R are available. The world's top universities teach their students the R programming language; it has now become an accepted standard and thus R will continue to grow. It offers integration capabilities with other languages. Plus there is large community support. Limitations Of R: There are a handful of limitations too: R isn't as fast as C++, plus security and memory management are issues too. R has a large number of namespaces; sometimes that can appear to be too many. However, it is getting better. R is a statistician's language, thus it is not as intuitive as Python. It's not as straightforward to create OOP as it is with Python. Let's Start Learning R I will now be presenting the language R in quick-to-follow sections. Photo by Jonas Jacobsson on Unsplash 3. How To Install R? You can install R on: Ubuntu, Mac, Windows, Fedora, Debian, SLES and OpenSUSE. The first step is to download R: Open an internet browser and go to www.r-project.org. The latest R version at the point of writing this article is 3.6.3 (Holding the Windsock). It was released on 2020–02–29. These are the links: 4. Where To Code R? There are multiple graphical interfaces available. I highly recommend RStudio. A screenshot of R-Studio Download RStudio Desktop: Download RStudio from https://rstudio.com/products/rstudio/download/ The RStudio Desktop Open Source License is free. You can learn more here: https://rstudio.com/products/rstudio/#rstudio-desktop RStudio requires R 3.0.1+. It usually installs RStudio in the following location if you are using Windows: C:\Program Files\RStudio 5. What Is An R Package And R Script? The R package and the R script are the two key components of R. This section will provide an overview of the concepts. R Package Since R is an open-source programming language, it is important to understand what packages are. A package essentially groups and organises code and other functions. A package is a library that can contain a large number of files. Data scientists can contribute and share their code with others either by creating their own packages or by enhancing others' packages. Packages allow data scientists to reuse code and distribute it to others. Packages are created to contain functions and data sets. A data scientist can create a package to organise code, documentation, tests, data sets etc. and the package can then be shared with others. There are tens of thousands of R packages available on the internet. These packages are located in central repositories. There are a number of repositories available such as CRAN, Bioconductor, GitHub etc. One repository worth mentioning is CRAN. It is a network of servers that store a large number of versions of code and documentation for R. A package contains a description file where one would state the date, dependencies, author, and version of the package amongst other information. The description file helps the consumers get meaningful information about the package. To load a package, type in: library(name of the package) To use a function of a package, type in the name of the package::name of the function. 
For example, if we want to use the function “AdBCDOne” of the package “carat” then we can do: library(carat) carat::AdBCDOne() R Script: R script is where a data scientist can write the statistical code. It is a text file with an extension .R e.g. we can call the script as tutorial.R We can create multiple R scripts in a package. As an instance, if you have created two R scripts: blog.R publication.R And if you want to call the functions of publication.R in blog.R then you can use the source(“target R script”) command to import publication.R into blog.R: source("publication.R") Create R Package Of A R Script The process is relatively simple. In a nutshell Create a Description file Create the R.scripts and add any data sets, documentation, tests that are required for the package Write your functionality in R scripts We can use devtools and roxygen2 to create R packages by using the command: create_package("name of the package") 6. What Are The Different R Data Types? It is vital to understand the different data types and structures in R to be able to use the R programming language efficiently. This section will illustrate the concepts. Data Types: These are the basic data types in R: character: such as “abc” or “a” integer: such as 5L numeric: such as 10.5 logical: TRUE or FALSE complex: such as 5+4i We can use the typeof(variable) to find the type of a variable. To find the metadata, such as attributes of a type, use the attributes(variable) command. Data Structures: A number of data structures exist in R. The most important data structures are: vector: the most important data structure that is essentially a collection of elements. matrix: A table-like structure with rows and columns data frame: A tabular data structure to perform statistical operations lists: A collection that can hold a combination of data types. factors: to represent categorical data I will explain all of these data types and data structures in this article as we start building the basics. 7. How To Declare Variables? We can create a variable and assign it a value. A variable could be of any of the data types and data structures that are listed above. There are other data structures available too. Additionally, a developer can create their own custom classes. A variable is used to store a value that can be changed in your code. As a matter of understanding, it is vital to remember what an environment in R is. Essentially, an environment is where the variables are stored. It is a collection of pairs where the first item of the pair is the symbol (variable) and the second item is its value. Environments are hierarchical (tree structure), hence an environment can have a parent and multiple children. The root environment is the one with an empty parent. We have to declare a variable and assign it a value using → x <- "my variable" print(x) This will set the value of “my variable” to the variable x. The print() function will output the value of x, which is “my variable”. Every time we declare a variable and call it, it is searched in the current environment and is recursively searched in the parent environments until the symbol is found. To create a collection of integers, we can do: coll <- 1:5 print(coll) 1 is the first value and 5 is the last value of the collection. This will print 1 2 3 4 5 Note, R-Studio IDE keeps track of the variables: Screenshot of R Studio The ls() function can be used to show the variables and functions in the current environment. 8. How To Write Comments? 
Comments are added in the code to help the readers, other data scientists and yourself understand the code. Note: Always ensure the comments are not polluting your scripts. We can add a single line comment: #This is a single line comment R has no dedicated multiline comment syntax; a workaround some use is a bare string in double quotes (it is evaluated and discarded rather than being a true comment): "This is a multi line comment " Note: In R-Studio, select the code you want to comment and then press Ctrl+Shift+C. It will automatically comment out the code for you. 9. What Are Vectors? The vector is one of the most important data structures in R. Essentially, a vector is a collection of elements where each element is required to have the same data type, e.g. logical (TRUE/FALSE), numeric, character. We can also create an empty vector: x <- vector() By default, the type of a vector is logical, such as TRUE/FALSE. The line below will print logical as the type of the vector: typeof(x) To create a vector with your elements, you can use the combine (concatenate) function c(): x <- c("Farhad", "Malik", "FinTechExplained") print(x) This will print: [1] "Farhad" "Malik" "FinTechExplained" If we want to find the length of a vector, we can use the length() function: length(x) This will print 3 as there are three elements in the vector. To add elements into a vector, we can combine an element with a vector. For example, to add "world" at the start of a vector with one element "hello": x <- c("hello") x <- c("world", x) print(x) This will print "world" "hello" If we mix the types of elements then R will accommodate that in the type of the vector too. The type (mode) of the vector will become whatever it considers to be the most suitable for the vector: another_vec <- c("test", TRUE) print(typeof(another_vec)) Although the second element is a logical value, the type will be printed as "character". Operations can also be performed on vectors. As an instance, to multiply a vector by a scalar: x <- c(1,2,3) y <- x*2 print(y) This will print 2 4 6 We can also add two vectors together: x <- c(1,2,3) y <- c(4,5,6) z <- x+y print(z) This will print 5 7 9 If the vectors are characters and we try to add them together: x <- c("F","A","C") y <- c("G","E","D") z <- x+y print(z) It will output: Error in x + y : non-numeric argument to binary operator 10. What Are Matrices? The matrix is also one of the most common data structures in R. It can be considered as an extension of a vector. A matrix can have multiple rows and columns. All of the elements of a matrix must have the same data type. To create a matrix, use the matrix() constructor and pass in nrow and ncol to indicate the rows and columns: x <- matrix(nrow=4, ncol=4) This will create a matrix variable, named x, with 4 rows and 4 columns. A vector can be transformed into a matrix by passing the vector to the constructor. The resultant matrix will be filled column-wise: vector <- c(1, 2, 3) x <- matrix(vector) print(x) This will create a matrix with 1 column and 3 rows (one for each element): [,1] [1,] 1 [2,] 2 [3,] 3 If we want to fill a matrix by row or column then we can explicitly pass in the number of rows and columns along with the byrow parameter: vector <- c(1, 2, 3, 4) x <- matrix(vector, nrow=2, ncol=2, byrow=TRUE) print(x) The above code creates a matrix with 2 rows and 2 columns. The matrix is filled by row. [,1] [,2] [1,] 1 2 [2,] 3 4 11. What Are Lists And Factors? If we want to create a collection that can contain elements of different types then we can create a list. Lists: Lists are one of the most important data structures in R. 
To create a list, use the list() constructor: my_list <- list("hello", 1234, FALSE) The code snippet above illustrates how a list is created with three elements of different data types. We can access any element by using the index, e.g.: item_one <- my_list[[1]] This will print "hello" We can also give each element a name, e.g. my_named_list <- list("A"=1, "B"=2) print(names(my_named_list)) It prints "A" "B" print(my_named_list['A']) It prints 1 Factors: Factors are categorical data, e.g. Yes, No or Male, Female or Red, Blue, Green, etc. A factor data type can be created to represent a factor data set: my_factor = factor(c(TRUE, FALSE, TRUE)) print(my_factor) Factors can be ordered too: my_factor = factor(c(TRUE, FALSE, TRUE), levels = c(TRUE, FALSE)) print(my_factor) We can also print the factors in tabular format: my_factor = factor(c(TRUE, FALSE, TRUE), levels = c(TRUE, FALSE)) print(table(my_factor)) This will print: TRUE FALSE 2 1 We have covered half of the article. Let's move on to more statistical learning. 12. What Are Data Frames? Most, if not all, data science projects require input data in tabular format. The data frame data structure can be used to represent tabular data in R. Each column can contain a list of elements and each column can be of a different type than the others. To create a data frame of 2 columns and 5 rows, simply do: my_data_frame <- data.frame(column_1 = 1:5, column_2 = c("A", "B", "C", "D", "E")) print(my_data_frame) column_1 column_2 1 1 A 2 2 B 3 3 C 4 4 D 5 5 E 13. Different Logical Operators In R This section provides an overview of the common operators: OR: one | two This will check if one or two is true. AND: one & two This will check if one and two are true. NOT: !input This will return TRUE if the input is FALSE. We can also use <, <=, >=, >, isTRUE(input) etc. 14. Functions In R And Variable Scope Sometimes we want the code to perform a set of tasks. These tasks can be grouped as functions. Functions are essentially objects in R. Arguments can be passed to a function in R and a function can return an object too. R is shipped with a number of in-built functions such as length(), mean(), etc. Every time we declare a function (or variable) and call it, it is searched in the current environment and is recursively searched in the parent environments until the symbol is found. A function has a name. This is stored in the R environment. The body of the function contains the statements of the function. A function can return a value and can optionally accept a number of arguments. To create a function, we need the following syntax: name_of_function <- function(argument1...argumentN) { Body of the function } For example, we can create a function that takes in two integers and returns their sum: my_function <- function(x, y) { z <- x + y return(z) } To call the function, we can pass in the arguments: z <- my_function(1,2) print(z) This will print 3. We can also set a default value for an argument so that it is used if no value is provided: my_function <- function(x, y=2) { z <- x + y return(z) } output <- my_function(1) print(output) The default value of y is 2, therefore, we can call the function without passing in a value for y. The key to note is the use of the curly brackets {...}. Let's look at a more complex case where we will use a logical operator. Let's consider that we want to create a function that accepts the following arguments: mode, x and y. If mode is TRUE then we want to add x and y. 
If mode is FALSE then we want to subtract x and y. my_function <- function(mode, x, y) { if (mode == TRUE) { z <- x + y return(z) } else { z <- x - y return(z) } } To call the function to add the values of x and y, we can do: output <- my_function(TRUE, 1, 5) print(output) This will print 6 Let's review the code below. In particular, see where print(z) is: my_function <- function(mode, x, y) { if (mode == TRUE) { z <- x + y return(z) } else { z <- x - y return(z) } #what happens if we try to print z print(z) } The key to note is that z is being printed after the brackets are closed. Will the variable z be available there? This brings us to the topic of scope in functions! A function can be declared within another function: some_func <- function() { some_func_variable <- 456 another_func <- function() { another_func_variable <- 123 } } In the example above, some_func and another_func are the two functions. another_func is declared inside some_func. As a result, another_func() is private to some_func(). Hence, it is not accessible to the outside world. If I execute another_func() outside of some_func as shown below: another_func() We will experience the error: Error in another_func() : could not find function "another_func" On the other hand, we can execute another_func() inside some_func() and it will work as expected. Now consider this code to understand how scope works in R. some_func_variable is accessible to both the some_func and another_func functions. another_func_variable is only accessible to the another_func function. some_func <- function() { some_func_variable <- "DEF" another_func <- function() { another_func_variable <- "ABC" print(some_func_variable) print("Inside another func" + another_func_variable) } print(some_func_variable) print("outside another func" + another_func_variable) another_func() } some_func() Running the above code will throw an exception in R-Studio: > some_func() [1] "DEF" Error in print("outside another func" + another_func_variable) : object 'another_func_variable' not found As the error states, another_func_variable is not found. We can see that DEF was printed, which was the value assigned to the variable some_func_variable. If we want to access and assign values to a global variable, use the <<- operator. The variable is searched in the parent environment frame. If a variable is not found then a global variable is created. To accept an unknown number of additional arguments, use the ... (dots) argument; inside the function the extra arguments can be collected with list(...): my_func <- function(a, b, ...) { print(list(...)) } my_func(a=1, b=2, c=3) 15. Loops In R R supports control structures too. Data scientists can add logic to the R code. This section highlights the most important control structures: For loops Occasionally, we want to iterate over elements of a collection. The syntax is: for (i in some_collection) { print(i) } In the example above, the iterator can be a list, vector, etc. The snippet of code above will print the elements of the collection. We can also loop over a collection by using the seq_along() function. It takes a collection and generates a sequence of integers. While loops Occasionally, we have to loop until a condition is met. Once the condition is false then we can exit the loop. We can use while loops to achieve this. In the code below, we are setting the value of x to 3 and the value of z to 0. Subsequently, we are incrementing the value of z by 1 each time until the value of z is equal to or greater than x. x <- 3 z <- 0 while(z < x) { z <- z + 1 } If Else (optional) If Then Else are heavily used in programming. 
In a nutshell, a condition is evaluated in the if control block. If it is true then its code is executed; otherwise the next block is executed. The next block can either be Else If or Else. if(condition is true) { # execute statements } We can also have an optional else: if(condition is true) { # execute statements } else if (another condition is true) { # execute statements } else { # execute statement } x <- 3 y <- 0 if (x == 3) { y <- 1 } else if (x<3) { y <- 2 } else { y <- 3 } Repeat If we want to repeat a set of statements for an unknown number of times (maybe until a condition is met or a user enters a value etc.) then we can use the repeat/break statements. A break can end the iteration. repeat { print("hello") x <- runif(1) if (x > 0.5) { break; #exit the loop } } If we want to skip an iteration, we can use the next statement: repeat { print("hello") x <- runif(1) if (x > 0.5) { next #iterate again } } 16. Read And Write External Data In R R offers a range of packages that allow us to read/write external data such as Excel files or SQL tables. This section illustrates how we can achieve it. 16.1 Read an Excel file library(openxlsx) path <-"C:/Users/farhadm/Documents/Blogs/R.xlsx" res <- read.xlsx(path, 'Sheet1') head(res) This will display the top rows: Snippet showing the contents of the excel file 16.2 Write To an Excel File columnA <- runif(100,0.1,100) columnB <- runif(100,5,1000) df <- data.frame(columnA, columnB) write.xlsx(df, file = path, sheetName="NewSheet", append=TRUE) This creates a new Excel file with a sheet named NewSheet: Snippet showing the contents of Excel 16.3 Read a SQL table We can read from a SQL table: library(RODBC) db <- odbcDriverConnect('driver={SQL Server};server=SERVERNAME;database=DATABASENAME;trusted_connection=true') res <- sqlQuery(db, 'SQL QUERY') 16.4 Write To A SQL Table We can write to a SQL table: sqlSave(odbcConnection, some data frame, tablename="TABLE", append=TRUE, rownames=FALSE) 17. Performing Statistical Calculations In R R is known to be one of the most popular statistical programming languages. It is vital to understand the in-built statistical functions in R. This section will outline the most common statistical calculations that are performed by data scientists. 17.1 Filling Missing Values: One of the most common tasks in a data science project is to fill the missing values. We can use is.na() to find the elements that are missing (NA or NaN): vec <- c("test", NA, "another test") is.na(vec) This will print FALSE TRUE FALSE, indicating that the second element is NA. To understand them better, is.na() will return all of those elements/objects that are NA. is.nan() will return all of the NaN objects. It's important to note that a NaN is an NA but an NA is not necessarily a NaN. Note: Many statistical functions such as mean, median, etc., take in an argument, na.rm, which indicates whether we want to remove the NA (missing) values. The next few calculations will be based on the following two vectors: A <- c(1,2,5,6.4,6.7,7,7,7,8,9,3,4,1.5,0,10,5.1,2.4,3.4, 4.5, 6.7) B <- c(4,4.1,0,1.4,2,1,6.7,7,5,5,8,9,3,2,2.5,0,10,5.1,4.3,5.7) print(length(A)) #20 print(length(B)) #20 Both of the vectors, A and B, contain 20 numerical values. 17.2 Mean Mean is computed by summing the values in a collection and then dividing by the total number of values: my_mean <- mean(A) print(my_mean) 17.3 Median Median is the middle value in a sorted collection. 
If there are an even number of values then it's the average of the two middle values: my_median <- median(A) print(my_median) 17.4 Mode A mode is the most frequent value. R is missing a standard built-in function to calculate the mode. However, we can calculate it with a few lines of code as shown below. distinct_A <- unique(A) matches <- match(A, distinct_A) table_A <- tabulate(matches) max_A <- which.max(table_A) mode<-distinct_A[max_A] print(mode) The code performs the following steps: Computes the distinct values of the collection. Then it finds the frequency of each of the items and creates a table out of it. Lastly, it finds the index of the term that has the highest occurrence and returns it as the mode. 17.5 Standard deviation Standard deviation measures how much the values deviate from the mean. sd <- sd(A) print(sd) 17.6 Variance Variance is the square of the standard deviation: var <- var(A) print(var) 17.7 Correlation Correlation helps us understand whether the collections have a relationship with each other and whether they co-move with each other, along with the strength of the relationship: cor <- cor(A, B) print(cor) We can pass in a specific correlation method such as Kendall or Spearman. Pearson is the default correlation method. Pass in the correlation method in the method argument. 17.8 Covariance Covariance informs us about the relationship between variables. covariance <- cov(A, B) print(covariance) 17.9 Standardise and normalise the data set We are often required to normalise data, such as by using min-max normalisation, or to calculate z-scores using standardisation. Standardising data means having a data set with mean = 0 and standard deviation = 1. It requires subtracting the mean from each of the observations and then dividing by the standard deviation. We can use the scale function. If we want to subtract the mean from each of the observations then set its center parameter to TRUE. If we want to standardise the data then we also need to set its scale parameter to TRUE. normal_A <- scale(A, center=TRUE, scale = FALSE) print(normal_A) standard_A <- scale(A, center=TRUE, scale = TRUE) print(standard_A) 17.10 Regression model Regression models are gaining popularity in machine learning solutions due to their simplicity and explainability. Essentially, regression models help us understand the relationship between different variables. Usually, the coefficients are computed for one or more variables. These variables are regressors. They are used to estimate and predict another variable, the dependent variable. The dependent variable is also known as the response variable. The data for the regressors is gathered through sampling and they are used to predict an outcome: y = b0 + b1*x1 + b2*x2 + … + bn*xn Here the bn are the coefficients which the linear model estimates and the xn are the independent variables. We are going to gather data for these variables and feed it to the model. As an instance, let's assume we gathered a data set for temperature and want to predict the rainfall. 
We can use a linear model as shown below: Temperature <- c(1,2,5,6.4,6.7,7,7,7,8,9,3,4,1.5,0,10,5.1,2.4,3.4, 4.5, 6.7) Rainfall <- c(4,4.1,0,1.4,2,1,6.7,7,5,5,8,9,3,2,2.5,0,10,5.1,4.3,5.7) model <- lm(Rainfall~Temperature) Note, if there were multiple variables used to predict a variable such as using humidity and temperature to predict rainfall, we can use the lm() function and set the formula as: Temperature <- c(1,2,5,6.4,6.7,7,7,7,8,9,3,4,1.5,0,10,5.1,2.4,3.4, 4.5, 6.7) Rainfall <- c(4,4.1,0,1.4,2,1,6.7,7,5,5,8,9,3,2,2.5,0,10,5.1,4.3,5.7) Humidity <- c(4,4.1,0,1.4,2,1,6.7,7,5,5,8,9,3,2,2.5,0,10,5.1,4.3,5.7) model <- lm(Rainfall~Temperature+Humidity) We can then print out the summary of the model. R will inform us about the residuals, coefficients, their standard deviation error, t-value, F statistics and so on: print(summary(model)) It will print the following statistics: Call: lm(formula = Rainfall ~ Temperature) Residuals: Min 1Q Median 3Q Max -4.2883 -2.2512 -0.2897 1.8661 5.4124 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 4.8639 1.3744 3.539 0.00235 ** Temperature -0.1151 0.2423 -0.475 0.64040 --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 2.933 on 18 degrees of freedom Multiple R-squared: 0.01239, Adjusted R-squared: -0.04248 F-statistic: 0.2258 on 1 and 18 DF, p-value: 0.6404 Based on the example above, the rainfall is -0.1151 * temperature + 4.8639 If we want to use the model to estimate a new value, we can use the predict() function whereby the first parameter is the model and the second parameter is the value of the temperature for which we want to predict the rainfall: temperature_value <- data.frame(Temperature = 170) rainfall_value <- predict(model,temperature_value) print(rainfall_value) 17.11 Bayesian model Bayesian model uses probability to represent the unknowns. The aim is to feed in the data to estimate the unknown parameters. As an instance, let’s assume we want to determine how much stock of a company will be worth tomorrow. Let’s also consider that we use the variable of company’s sales to estimate the stock price. In this instance, the stock price is the unknown and we are going to use the value of company’s sales to calculate the price of the stock. We can gather a sample of historic sales and stock price and use it to find the relationship between the two variables. In a real-world project, we would add more variables to estimate the stock price. The key concepts to understand are conditional probability and Bayes theorem. These concepts were explained in the article: Essentially, what we are trying to do is to use the prior probability of stock price to predict the posterior using the likelihood data and the normalizing constant. install.packages("BAS") library(BAS) StockPrice <- c(1,2,5,6.4,6.7,7,7,7,8,9,3,4,1.5,0,10,5.1,2.4,3.4, 4.5, 6.7) Sales <- c(4,4.1,0,1.4,2,1,6.7,7,5,5,8,9,3,2,2.5,0,10,5.1,4.3,5.7) model <- bas.lm(StockPrice~Sales) print(summary(model)) Notice that we installed BAS package and then used the BAS library. 
The results are then displayed: P(B != 0 | Y) model 1 model 2 Intercept 1.00000000 1.0000 1.00000000 Temperature 0.08358294 0.0000 1.00000000 BF NA 1.0000 0.09120622 PostProbs NA 0.9164 0.08360000 R2 NA 0.0000 0.01240000 dim NA 1.0000 2.00000000 logmarg NA 0.0000 -2.39463218 Different prior distributions on the regression coefficients may be specified using the prior argument, and include "BIC", "AIC", "g-prior", "hyper-g", "hyper-g-laplace", "hyper-g-n", "JZS", "ZS-null", "ZS-full", "EB-local" and "EB-global". 17.12 Generating random numbers To generate random numbers within a range, use the runif function. This will generate 100 random numbers between 0.1 and 10.0: random_number <- runif(100, 0.1, 10.0) print(random_number) We can also use the sample() function to sample items and numbers with or without replacement. 17.13 Poisson distribution We can fit a Poisson regression by using the glm model where the family is poisson: output <-glm(formula = Temperature ~ Rainfall+Humidity, family = poisson) print(summary(output)) The results will be printed: Deviance Residuals: Min 1Q Median 3Q Max -3.2343 -0.8547 -0.1792 0.8487 1.8781 Coefficients: (1 not defined because of singularities) Estimate Std. Error z value Pr(>|z|) (Intercept) 1.69807 0.17939 9.466 <2e-16 *** Rainfall -0.02179 0.03612 -0.603 0.546 Humidity NA NA NA NA --- Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 (Dispersion parameter for poisson family taken to be 1) Null deviance: 35.460 on 19 degrees of freedom Residual deviance: 35.093 on 18 degrees of freedom AIC: Inf Number of Fisher Scoring iterations: 5 17.14 Normal distribution There are several ways to generate data with a normal distribution. The most common way is to call the rnorm function with the sample size, mean and standard deviation: y <- rnorm(100, 0, 1) 17.15 Forward substitution Forward substitution is a common process used to solve a system of linear equations: Lx = y In this instance, L is a lower triangular coefficient matrix with non-zero diagonal elements. There are two functions that help us with forward and backward substitution. R has forwardsolve(A,b) for forward substitution on lower triangular A, and backsolve(A,b) for backward substitution on upper triangular A. Specifically, they are: backsolve(r, x, k = ncol(r), upper.tri = TRUE, transpose = FALSE) forwardsolve(l, x, k = ncol(l), upper.tri = FALSE, transpose = FALSE) r: an upper triangular matrix: R x = b l: a lower triangular matrix: L x = b Both of these triangular matrices hold the coefficients of the system we are attempting to solve. x: This is the matrix whose columns give the right-hand sides for the equations. k: It is the number of columns of r and rows of x that we are required to use. upper.tri: If TRUE then use the upper triangular part of r. transpose: If TRUE then we are attempting to solve r' * y = x for y. The output will be the same type as x; therefore, if x is a vector then the output will be a vector, and if x is a matrix then the output will be a matrix. 17.16 T-tests t-tests can be performed by using the t.test() function. As an instance, a one-sample t-test in R can be run by using t.test(y, mu = 0) where y is the variable which we want to test and mu is the mean as specified by the null hypothesis: Humidity <- c(4,4.1,0,1.4,2,1,6.7,7,5,5,8,9,3,2,2.5,0,10,5.1,4.3,5.7) t.test(Humidity, mu = 5) The code above tests the null hypothesis that the true mean of Humidity is equal to 5 (by default a two-sided test). 
The results are: One Sample t-test data: Humidity t = -1.1052, df = 19, p-value = 0.2829 alternative hypothesis: true mean is not equal to 5 95 percent confidence interval: 2.945439 5.634561 sample estimates: mean of x 4.29 18. Plotting In R This section will explain how easy it is to plot charts in R. X-Y Scatter Plot I have generated the following data: x<-runif(200, 0.1, 10.0) y<-runif(200, 0.15, 20.0) print(x) print(y) The code snippet will plot the chart plot(x, y, main='Line Chart') XY Scatter Plot Correlogram install.packages('ggcorrplot') library(ggcorrplot) x<-runif(200, 0.1, 10.0) y<-runif(200, 0.15, 20.0) z<-runif(200, 0.15, 20.0) data <- data.frame(x,y,z) corr <- round(cor(data), 1) ggcorrplot(corr) Histogram x<-runif(200, 0.1, 10.0) hist(x) 19. Object-Oriented Programming In R This section will outline the concept of object-oriented programming in R. It is vital to understand how objects are created in R because it can help you implement scalable and large applications in an easier manner. The foremost concept to understand in R programming language is that everything is an object. A function is an object too, as explained in the functions section. Therefore, we have to define a function to create objects. The key is to set the class attribute on an object. R supports object-oriented concepts such as inheritance. A class can be a vector. There are several ways to create a class in R. I will demonstrate the simplest form which revolves around creating S3 classes. This involves creating a list of properties. Before I explain how to build a full-fledged class, let’s go over the steps in a simplified manner: The first step is to create a named list, where each element has a name. The name of each element is a property of the class. As an example, this is how we can create a Human class in R: farhad <- list(firstname="Farhad", lastname="Malik") class(farhad) <- append(class(farhad), "Human") We have created an instance of a Human class with the following properties: firstname set to Farhad and lastname set to Malik. To print the firstname property of the instance of Human object, we can do: print(farhad$firstname) Now, let’s move on to another important concept. How do we create an instance method for the object? The key is to use the command: UseMethod This command informs the R system to search for a function. An object can have multiple classes, the UseMethod uses the class of the instance to determine which method to execute. Let’s create GetName function that returns the concatenated string of first and last name: #This is how you create a new generic function GetName <- function(instance) { UseMethod("GetName", instance) } #This is how you provide a function its body GetName.Human <- function(instance) { return(paste(instance$firstname,instance$lastname)) } GetName(farhad) To wrap it up, let’s create a class Human with properties firstname and lastname. It will have a function: GetName() which will return the first and last name. Trick: create a function that returns a list and pass in the properties as the arguments to the function. Then use the UseMethod command to create the methods. 
Human <- function(firstname, lastname) { instance <- list(firstname=firstname, lastname=lastname) class(instance) <- append(class(instance), "Human") return(instance) } GetName <- function(instance) { UseMethod("GetName", instance) } GetName.Human <- function(instance) { return(paste(instance$firstname,instance$lastname)) } farhad <- Human(firstname="Farhad", lastname="Malik") print(farhad) name <- GetName(farhad) print(name) This will print: > print(farhad) $firstname [1] "Farhad" $lastname [1] "Malik" attr(,"class") [1] "list" "Human" > > > name <- GetName(farhad) > print(name) [1] "Farhad Malik" What if we want to create a new class OfficeWorker that inherits from the Human class and provides different functionality for the GetName() method? We can achieve it by: Human <- function(firstname, lastname) { instance <- list(firstname=firstname, lastname=lastname) class(instance) <- append(class(instance), "Human") return(instance) } OfficeWorker <- function(firstname, lastname) { me <- Human(firstname, lastname) # Add the class class(me) <- append(class(me),"OfficeWorker") return(me) } If we create an instance of an office worker and print the instance, it will print the following contents: worker = OfficeWorker(firstname="some first name", lastname="some last name") print(worker) > print(worker) $firstname [1] "some first name" $lastname [1] "some last name" attr(,"class") [1] "list" "Human" "OfficeWorker" Note that the classes of the instance are list, Human and OfficeWorker. To give the office worker a different GetName function, we can override it: GetName <- function(instance) { UseMethod("GetName", instance) } GetName.Human <- function(instance) { return(paste(instance$firstname,instance$lastname)) } GetName.OfficeWorker <- function(instance) { return(paste("Office Worker",instance$firstname,instance$lastname)) } This will print: > GetName(worker) [1] "some first name some last name" Notice that the Human version still ran: because "OfficeWorker" was appended after "Human" in the class vector, UseMethod dispatches to GetName.Human first. To make the OfficeWorker method take precedence, put the new class at the front instead, for example class(me) <- c("OfficeWorker", class(me)). 20. How To Install External R Packages It's extremely straightforward to install an R package. All we have to do is type in the following command: install.packages("name of package") To install multiple packages, we can pass in a vector to the install.packages command: install.packages(c("package1","package2")) For instance, caret is one of the most popular machine learning packages. R-Studio makes it extremely easy to install a package. To install caret, click on the Packages tab at the bottom right and then click on the Install button. Enter "caret" and click on Install. Dialog boxes will appear to indicate that the package is being installed: Once the package is installed, you can see it in the Console Window: The downloaded binary packages are in C:\Users\AppData\Local\Temp\Rtmp8q8kcY\downloaded_packages To remove a package, type in: remove.packages("package name") 21. Famous R Libraries Other than the R libraries that have been mentioned in this article along with the built-in R functions, there are a large number of useful R packages which I recommend: Prophet: For forecasting, data science and analytical projects Plotly: For graphs Janitor: For data cleaning Caret: For classification and regression training Mlr: For machine learning projects Lubridate: For time series data Ggplot2: For visualisation Dplyr: For manipulating and cleaning data Forcats: When working with categorical data 22. Summary This article explained the following key areas about R: What Is R? How To Install R? Where To Code R? What Is An R Package and R Script? What Are The Different R Data Types?
How To Declare Variables And Their Scope In R? How To Write Comments? What Are Vectors? What Are Matrices? What Are Lists? What Are Data Frames? Different Logical Operations In R Functions In R Loops In R Read And Write External Data In R Performing Statistical Calculations In R Plotting In R Object-Oriented Programming In R Famous R Libraries How To Install External R Libraries I explained the programming language R from the basics in a manner that would make the language easier to understand. Having said that, the key to advancing in programming is to always practice as much as possible. Please let me know if you have any feedback or comments.
https://towardsdatascience.com/r-statistical-programming-language-6adc8c0a6e3d
['Farhad Malik']
2020-03-31 18:02:33.764000+00:00
['Programming', 'Fintechexplained', 'Statistics', 'R', 'Technology']
512
Better JavaScript — State, and Array vs Array-Like Objects
Photo by Vidar Nordli-Mathisen on Unsplash Like any kind of apps, JavaScript apps also have to be written well. Otherwise, we run into all kinds of issues later on. In this article, we’ll look at ways to improve our JavaScript code. No Unnecessary State APIs can be classified as stateful or stateless. A stateless API provides functions or methods whose behavior depends on inputs. They don’t change the state of the program. If we change the state of a program, then the code is harder to trace. A part of a program that changes the state is a stateful API. Stateless APIs are easier to learn and use, more self-documenting, and less error-prone. They’re also easier to test. This is because we get some output when we pass in some input. This doesn’t change because of the external state. We can create stateless APIs with pure functions. Pure functions return some output when given some input. When the inputs are the same in 2 different calls, we get the same output each time. So instead of writing: c.font = "14px"; c.textAlign = "center"; c.fillText("hello, world!", 75, 25); We write: c.fillText("14px", "center", "hello, world!", 75, 25); The 2nd case is a function that takes inputs and doesn’t depend on the font and textAlign property as it does in the previous example. Stateless APIs are also more concise. Stateful APIs led to a proliferation of additional statements to set the internal state of an object. Use Structural Typing for Flexible Interfaces JavaScript objects are flexible, so we can just create object literals to create interfaces with items we want to expose to the public. For instance, we can write: const book = { getTitle() { /* ... */ }, getAuthor() { /* ... */ }, toHTML() { /* ... */ } } We have some methods that we want to expose to the public. We just provide this object as an interface for anything that uses our library. This is the easiest way to create APIs that the outside world can use. Distinguish Between Array and Array-Like Objects JavaScript has arrays and array-like objects. They aren’t the same. Arrays have their own methods and can store data in sequence. They’re also an instance of the Array constructor. Array-like objects don’t have array methods. To check if something is an array, we call the Array.isArray method to check. If it return false , then it’s not an array. Array-like objects can be iterable or not. If they’re iterable, then we can convert them to an array with the spread operator. For instance, we can convert NodeLists and the arguments object to an array: [...document.querySelectorAll('div')] [...arguments] We convert the NodeList and the arguments object to an array. If it’s a non-iterable array-like object, which is one with non-negative integer keys and a length property, we can use the Array.from method to do the conversion. For instance, we can write: const arr = Array.from({ 0: 'foo', 1: 'bar', length: 2 }) Then arr would be: ["foo", "bar"] Photo by Michael on Unsplash Conclusion We shouldn’t have unnecessary states in our program. Also, duck typing is good for identifying types. And we should distinguish between array and array-like objects. Enjoyed this article? If so, get more similar content by subscribing to Decoded, our YouTube channel!
https://medium.com/javascript-in-plain-english/better-javascript-state-and-array-vs-array-like-objects-7396a21c28c0
['John Au-Yeung']
2020-11-01 19:19:10.694000+00:00
['JavaScript', 'Programming', 'Web Development', 'Technology', 'Software Development']
513
Simple Yet So Powerful
I was always a fan of the butterfly keyboard on the 2015 MacBook and all MacBooks that were equipped with it after. I never had the unreliability issues and know if I did I would have a different opinion, but the feeling when typing on those keyboard, to me, was very satisfying. Recently, I used a 2014 MacBook Pro 13-inch and could reminisce on the keyboard before the butterfly design and though it had more travel I didn’t like how it felt less stable. Each key was a little more wobbly and didn’t have the same tactile feeling that I got from the butterfly version. Apple has since gone back to a scissor switch design on all of its MacBooks and I will admit I was a little hesitant on how they would be. I really did like the stability of the butterfly but knew I was lucky that I did not run into any issues with sticky keys or keys not working at all. After getting the opportunity to review the 16-inch MacBook Pro, when it came out, with the new Magic Keyboard I was thrilled. Apple could bring back the travel but simultaneously make the keyboard feel more solid and tactile. The new scissor switches feel perfect with the better travel but Apple also took some new things it learned from the butterfly design to make them feel more stable as you type. Close up of M1 Air’s new function keys. With the new MacBook Air, you also get new function keys at the top with a one for Spotlight search, Dictation and Do not Disturb replacing the keyboard brightness and Launchpad keys. I will say that I use Command + Space to trigger Spotlight so having a dedicated function key for me is a waste. I also don’t have any plans on using Do Not Disturb or Dictation so losing those keys for Keyboard Brightness is also a loss. This isn’t a huge problem since I hardly use the function row anyway and I really do like that Apple added the ability to quickly get to Emoji’s from the Function key now. One last thing on the keyboard, even though this isn’t really a key on the keyboard, Touch ID is fantastic. It has been awhile since I have used a MacBook again and love that so many more apps are starting to use Touch ID. 1Password, for instance, has been huge, not having to type in my long password in Safari anymore to use the extension is really awesome. Performance and Battery Because this a M1 I felt it was important to go a little more detail when it came to performance. I am in no way a hardcore power user. I do like to dabble in Xcode sometimes but even then I am not compiling a ton of code. The most I do on my machines that does require some performance is photo editing. I have Affinity Photo but many times when I am trying to get some photos ready for a post I use the Photo’s app to quickly edit them. Once thing I have noticed when using Photos on my old 2018 MacBook Pro 13-inch was the very slow performance in Photos. I had a quad-core i5 Processor with 8 GB of RAM and many times I would be sitting with the rainbow beach ball spinning after just clicking the Edit button on a photo. On this MacBook Air I don’t think I have seen the rainbow beach ball once. Nothing has felt slow, not even browsing the web. Ulysses has always been a champ, so I never really feel like I need much to get that app running but everything else I do on this MacBook Air feels snappy. I was concerned about getting an 8 GB variant over the 16 GB but decided to run my own test to see if this was a huge concern or not. 
I opened up Chrome, the new M1 supported version, and played about 10 YouTube videos all in 4K concurrently while keeping other tabs open with random websites. I then started to do other things on the MacBook Air like; open a bunch of apps, download a bunch more apps from the App Store, listen to music, write in Ulysses, and edit a few photos. The real test, though, was to see if any of the Chrome tabs were going to refresh when I opened them and unsurprisingly none did. Neither the ones playing the 4K videos, which they all still were, or any of the other websites that I had running as well. Everything was available for me once I clicked the tab, sitting in memory. This is in no way scientific and in no way representative to a number of other people who use computers. But for me, this was a good test on what I use my computer for. I like to have many tabs open when I am researching something and sometimes have music or videos playing in the background while doing other things. If I was compiling a ton of code, rendering 4K videos, or dealing with multiple gigantic files simultaneously I would probably lean more towards the 16 GB. But for me, I think saving the $200 was worth it. I couldn’t resist the urge of comparing the Intel MacBook Air and M1 MacBook Air since I had both at the same time for a while and all I can say is: Wow! I am glad I went with the M1! Intel Air on the left and M1 Air on the right. Just like my own test on RAM, benchmarks also don’t tell you the full story but it can provide a baseline on what each can do, after running benchmarks on both these machines it is incredible that the M1 MacBook Air can hit these numbers. Intel Air on the left and M1 Air on the right both using Intel version of GeekBench. Even when using the Intel version of GeekBench, using Rosetta 2, on the M1 it outperformed the i5 Intel MacBook Air. It is fantastic to see Apple’s hard work on creating chips for the iPhone and iPad all these years has paid off for their MacBooks.
https://medium.com/techuisite/simple-yet-so-powerful-4719f75d85db
['Paul Alvarez']
2020-11-28 17:29:53.739000+00:00
['Technology', 'Apple', 'MacBook', 'Review', 'Gadgets']
514
Full disclosure, always: Peer Mountain listed on Project Transparency
We are proud to announce that Project Transparency has listed Peer Mountain as a responsible and trusted member of the blockchain and cryptocurrency industry. As supporters of this voluntary disclosure standard, we will provide full disclosure of the wallets Peer Mountain controls, and provide a voluntary explanation of any expenditure greater than 0.5% of the funds collected. Project Transparency is a self-regulated venture that aims to improve regulatory procedures and disclosure in the cryptocurrency sector. Launched in 2017 by sixteen industry leaders, including Maecenas, Santiment, and Cofound.it, the initiative currently represents over $650M in market capitalisation. “With the rapid rise of digital currencies and proliferation of ICOs, investors increasingly want security regarding their funds and transparency on how they are administered.” Project Transparency website Self-governance and transparency are among Peer Mountain’s core values. In the face of stricter regulation of crypto markets around the world, we’re committed to developing and maintaining honest and open relationships with our investors, the crypto community, and lawmakers. As early supporters of ICOCharter.eu, a self-regulation charter for European ICOs, we’ve also established a fair and transparent ICO. Our CEO, Jed Grant, developed a comprehensive guide to ICO due diligence to help investors better evaluate not only Peer Mountain, but all crypto projects. This three-part series features a framework for ICO due diligence, advice on crypto valuation, and running a fair ICO. Transparency is vital to Peer Mountain’s economy of trust, and to strengthening investor confidence in the cryptocurrency market. We’re looking forward to helping Project Transparency reach its ambitious goals.
https://medium.com/peermountain/full-disclosure-always-peer-mountain-listed-on-project-transparency-9f8bfc25a9e6
['Peer Mountain']
2018-06-26 07:53:39.792000+00:00
['Cryptocurrency', 'Bitcoin', 'Blockchain', 'Technology', 'Ethereum']
515
This Raspberry Pi-Powered Monitor Can Deliver Car Engine Information at a Glance
Image: Paul Slocum via Reddit Maker Paul Slocum engineered a custom Raspberry Pi-powered OBD2 monitor that displays information about his '97 Honda's engine via a Bluetooth dongle. By Brittany Vincent When it comes to older cars, it can be difficult to keep track of specific stats. If you don't have a Tesla with an advanced touch screen or even a newer car with a modern display, it's easy to feel in the dark about what's going on with your engine. That's where Raspberry Pi comes in. Tom's Hardware spotted Paul Slocum's intriguing fix for car owners needing a monitor to display engine information using a Raspberry Pi as a base. His personal unit was designed to fit his '97 Honda, down to the car's aesthetic. The monitor automatically connects with the car via a Bluetooth dongle and shares information when the engine starts up. "I wanted a small, customizable OBD2 monitor that automatically turns on and off with the car, and I couldn't find quite what I wanted among existing apps and products," Slocum wrote. The OBD2 port, as he explains, is a computer port found in most modern cars that allows for real-time analysis of engine data. Written in C++, the project, which uses a run-of-the-mill Raspberry Pi, also utilizes SDL (Simple DirectMedia Layer) for graphics as well as a touch interface. Slocum added a 3.5-inch Waveshare IPS GPIO screen to bring it all together. Though Slocum notes that the software "needs a little more work," he's planning on releasing it as open source with a customizable layout, fonts, graphics, and other data for different cars. It doesn't need to be customized further, as it will automatically connect to any Bluetooth OBD dongle. For those interested in learning more about how the project came together, Slocum outlined the process in a lengthy Reddit post. You can also follow the creator on GitHub for future updates.
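For readers who want to experiment with the same idea before building anything as polished as Slocum's C++/SDL interface, a minimal sketch in Python (using the third-party python-OBD library, not Slocum's code) can already poll live engine data from a Bluetooth OBD2 dongle on a Raspberry Pi. The serial port path and the chosen commands below are assumptions to adapt to your own adapter and car.

import time

import obd  # pip install obd (python-OBD)

# Assumption: the Bluetooth ELM327-style dongle has been bound to /dev/rfcomm0.
connection = obd.OBD("/dev/rfcomm0")

# A few commonly supported readings to watch; swap in whatever your car reports.
watched = [obd.commands.RPM, obd.commands.SPEED, obd.commands.COOLANT_TEMP]

while connection.is_connected():
    for cmd in watched:
        response = connection.query(cmd)
        if not response.is_null():
            print(f"{cmd.name}: {response.value}")
    time.sleep(1)  # poll roughly once per second

From here, a small display loop (or a framework such as SDL, as Slocum used) would render the values instead of printing them.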
https://medium.com/pcmag-access/this-raspberry-pi-powered-monitor-can-deliver-car-engine-information-at-a-glance-31acdd50afd8
[]
2020-12-15 19:03:09.681000+00:00
['Raspberry Pi', 'Technology', 'DIY', 'Cars']
516
Dev Tips: Searching GitHub Like a Pro
You've heard the saying before: don't re-invent the wheel. In plain terms, if something already exists and has been proven to work, then it's kind of a waste of time to go through the entire process of creating it on your own. In programming, I really couldn't emphasize this point more. Now, there's a time and place for coding things on your own: maybe it's for school projects where you will learn from the process, or maybe it's a software engineering job interview where you can't just simply say, "Well, I would just use C++'s append() method for this." However, in most cases, you're going to want a solid reference — it's not always the case that you'll just copy and paste, but in my full-time software engineering experience, it's so helpful to at least have something to go off of when writing my code. That's where GitHub steps in: it literally has millions of lines of code — much of it relevant to the use case you're trying to program — available at your fingertips. Searching through it to help you complete your daily tasks may seem painfully obvious when you're reading this; maybe you're wondering why I'm sharing this, but I honestly don't see enough developers doing it. In fact, I see many proudly proclaim their Google-Fu and their Stack Overflow savviness, but often they can't find everything they need and neglect the power of GitHub Search. How to search on GitHub the pro's way First of all, you're going to want to be signed in on GitHub in order to be able to search for all public repositories' code. Next, just search what you want in the input on the top left of the toolbar (given the rapid rate of UI changes these days across the internet, this could change one day): Your new best friend. Let's say I want to figure out how to use AWS DynamoDB's ScanInput, but I can't figure out how to get an example going just from the documentation alone, or I just want to see other ways people are using it. So, I type "ScanInput" into the search box. I then select Code because I want to see actual snippets of code that pertain to ScanInput, and since I am programming in Go, I select Go. This part is crucial because PHP code won't really help me that much. 1,000+ possible coding assistants. When you get your results, don't assume every result is relevant to you; this is where your own knowledge has to come in. I see immediately that the first results are not really relevant to AWS DynamoDB, so I keep scrolling and find some relevant results. Utilize a different sort In the screenshot above, you'll notice that the results are sorted by "Best match". I'm not sure what GitHub's algorithm is for this, but it's 100% worth trying different sorts if you're not having much luck. I really love the "Recently indexed" sort because it essentially shows the newest relevant code that people have pushed. I've just found that you're more likely to get results that more closely align with your code's versions, schemas, etc. My favorite sort selection. Searching for languages that don't show up in the sidebar Let me show the sidebar that came up when we searched for ScanInput again to show you a potential issue you may come across: More likely than not, the language you want will be in this sidebar. However, there have been times when my language hasn't been there. I used to think it was just a limitation of GitHub, but then I realized a workaround: query parameters in the URL. When I selected Go, the query parameter l=Go was added to the URL.
If I wanted some Python results, therefore, I could just pop in Python where Go was. Once I do that, Python results appear! Note that you can also do this with GitHub’s advanced search feature, which I will go over next. Advanced search If you click on “Advanced Search” underneath the Languages sidebar, you have some more power behind your search. I’ve found certain filters in here to be quite useful; I like to query based on file extension types, for instance. Special search filters cheat sheet You can leverage the special filters GitHub offers in the search box itself instead of using the visual interface. Under the “Languages” toolbar, you’ll see a cheat sheet available: For another reference on this special search syntax, check out the GitHub Help documentation.
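As a side note not covered in the original article, the same qualifiers also work outside the web UI through GitHub's REST search API, which can be handy for scripting repeated searches. The sketch below is a minimal example and assumes you have a personal access token exported as GITHUB_TOKEN, since the code search endpoint requires authentication.

import os

import requests

token = os.environ["GITHUB_TOKEN"]  # assumption: token stored in an env var
resp = requests.get(
    "https://api.github.com/search/code",
    params={"q": "ScanInput language:go", "per_page": 10},
    headers={
        "Authorization": f"token {token}",
        "Accept": "application/vnd.github.v3+json",
    },
)
resp.raise_for_status()

# Each hit points at a file in a repository; item["html_url"] opens it in a browser.
for item in resp.json()["items"]:
    print(item["repository"]["full_name"], item["path"])

The q string accepts the same language: and extension: qualifiers as the search box, so anything you refine in the UI can be replayed from a script.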
https://medium.com/cloud-native-the-gathering/dev-tips-searching-github-like-a-pro-5de2e73cba3d
['Tremaine Eto']
2019-09-17 23:39:56.332000+00:00
['Technology', 'Software Development', 'Productivity', 'Programming', 'Coding']
517
How to write fast software
Premature optimisation is the root of all evil TL:DR Fast software, means fast enough to solve the problem it was meant to solve, as a result: Only optimise a piece of software to the point where it is fast enough for the business. To reach that level of performance, apply the following: Write your code in a way that prioritises clarity Is it fast enough for the business? If it is fast enough, good, stop. If it is not fast enough, find the bottleneck Weigh up your options, then fix the bottleneck Go back to “Is it fast enough for the business?” Key to this approach is not attempting to predict where performance bottlenecks will occur and preemptively fix them, instead: Deploy your software, then find the real bottleneck. What does “fast” mean? Fast is talking about the amount of time it takes for software to do some operation. Importantly, fast is a relative term, there needs to be some other thing that the code is being fast in comparison to. Saying that a piece of software is fast in isolation does not make sense, it can execute in a set period of time, like 3 seconds, is that fast? I don’t know, what is the context? Fast for a webserver response? Not really. Fast for a batch job? Yeah, probably. Defining fast is then a case of deciding what the thing is that the piece of software is being fast in comparison to. Most of the time, this will mean comparing the time taken by the software with the expectation from the business. Fast for a human facing webserver response might be less than 100ms. Fast for a batch job might be finishing in less than 5 minutes. Fast for a complex business process with human steps might be less than 48 hours. The critical thing here, is that “Fast” means “Fast enough to solve the business problem”. The optimisation loop Once there is a definition of fast, the following process can be used to achieve the goal of “fast software”. Write your code in a way that prioritises clarity Is it fast enough for the business? If it is fast enough, good, stop. If it is not fast enough, find the bottleneck Weigh up your options, then fix the bottleneck Go back to “Is it fast enough for the business?” That’s the basic process, so let’s look at each point in a bit more detail. Write your code in a way that prioritises clarity I’m a fan of writing code in as simple and clear a way as possible. The first pass of a piece of code I tend to prioritise clarity over speed. This comes from my experience of software usually being more than fast enough, but rarely clear or simple enough for the business. This step is probably the largest step, it basically covers implementing the entirety of the feature. Everything that comes after this is about analyzing and optimising the code written here. Is it fast enough for the business? This is the critical question, not “is it fast enough for you?”, or “is it fast enough for Google / Netflix / Amazon etc.”. The software needs to be fast enough to solve the problem it is trying to solve, beyond that, time spent making it faster is time that could have been spent doing something more useful. Most of the time the business will not tell you how fast something needs to be. Asking the question “how fast should this be?” is also a dangerous game, because the answer is likely to be “as fast as possible”. Making something as fast as possible is pulling on a string that will pull you down a very deep hole. 
So, given that you're unlikely to get a very clear instruction of how fast something needs to be for the business, how do you know if something is fast enough? It depends. The easiest way to find out is to release the software: if it goes out and nobody complains, then it is probably fine. This is not an ideal situation, but it is the way a lot of software is evaluated to see if it is fast enough. Put it out, it solves the problem, nobody complains about the speed, it's fast enough, job done. It is possible to be more proactive in finding out if something is fast enough. If you are writing a website, users will often not tell you whether the page loads fast enough; they will just leave. This is where using something like Google's page speed tool can be helpful. It will give you an idea of how fast the page needs to be. When trying to work out how fast you need to be, it is worth considering the number of users and the time sensitivity of their tasks. As the number of users increases, the speed of the software typically matters more because it affects a lot of people. Is it a script being written for one user that doesn't mind waiting 30 seconds? Is it a webserver that is serving a few hundred users? Are those users doing something that is time sensitive, like talking to a customer on the phone or trying to bid against other people in real time? Is it a batch process for a large government agency that doesn't mind if it takes a week? Is it a hedge fund where every nanosecond of latency costs money? Finding out how fast something needs to be can be quite difficult; most of the time people won't bother and will release and see if it is fast enough. If that is the approach, then at least have some form of monitoring that allows you to see how fast you are actually going. If it is fast enough, good, stop The goal of this process is to write software that is fast enough, but no faster than necessary. Once the software has achieved the level of fast enough, stop, move on to writing something else that is going to provide value. I'm not saying that we slow software down deliberately so that it is just barely fast enough, only that if the software ever gets to the point where we can stop working on it, we take that opportunity. Optimising software for performance is a process that can eat as much time as you want to dedicate to it. The goal is to not dedicate more time to it than is needed, because that is time that could have been spent working on something else. Engineering time is expensive; you are more valuable if you are working on something that brings the business more value. If it is not fast enough, find the bottleneck In the situation where the software that has been written is not fast enough to solve the problem it is intended for, we are in the realm of performance optimisation. Any piece of software will have a bottleneck somewhere; if it didn't, it would finish in the same instant that it started. The goal here is to find the current largest bottleneck in the piece of software. Finding bottlenecks can be quite tricky, so I'd recommend getting some help: use tools that are designed for the purpose of debugging performance issues. In Java, there are a bunch of tools that hook into a running Java process to give all kinds of useful information. If you are writing some form of webserver, there are tools like NewRelic (other tools are available) that can provide breakdowns of where you are spending the most time in your code.
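To make the "measure, don't guess" point concrete, here is a minimal sketch using Python's built-in cProfile module; the language and the toy handle_request function are my own illustration rather than anything from the original article, and equivalent profilers exist for Java, Go and most other stacks.

import cProfile
import pstats


def handle_request():
    # Stand-in for the real work your service does per request.
    return sum(i * i for i in range(200_000))


profiler = cProfile.Profile()
profiler.enable()
for _ in range(50):
    handle_request()
profiler.disable()

# Show the 10 functions with the largest cumulative time; the top entries
# are the bottleneck candidates worth discussing, the rest is noise for now.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)

Whatever tool you use, the output should tell you where the time actually goes, which is usually somewhere other than where a code review predicted it would be.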
There are a lot of different places a bottleneck can hide, for example: Database — is there a particularly hefty query? Does it have appropriate indexing? IO — are you trying to read enormous files from the file system? Is it an SSD or a spinny disc? Network — are you contacting a server a long way away? Sending large amounts of data across the network? Making a lot of round trips to some external service? Application configuration — how many incoming / outgoing requests can your threadpool handle concurrently? Is the app using recommended production settings? Are you using the values that came as default? CPU — are you consistently topping out your CPU at 100%? Does your application have the capability to utilise all the CPUs that have been provided to it? Memory — are you running out of memory? Are you filling up one of the memory generations in particular (for garbage collected languages)? These are some of the very broad range of possible bottlenecks you can have. There are so many possibilities and they can be so complex, that trying to predetermine which of these is going to be bottleneck is an almost impossible task. Only try and identify the bottleneck once you know you need to find a bottleneck. This is partly why premature optimisation is the root of all evil, trying to predict and fix bottlenecks will make the code more confusing and the bottleneck that is fixed is probably not going to be the largest bottleneck, rendering the premature fix mostly useless. Weigh up your options and fix the bottleneck So the largest bottleneck has been identified, could be a Database issue, Networking, IO, doesn’t matter. The next step is deciding how to fix it. Once the problem is clearly identified as a bottleneck, there are probably going to be a few different options on how to fix it. It could involve adding some caching, indexing, changing the infrastructure, updating the code to make fewer requests. The key here, is that the conversation on how to fix the bottleneck is only occurring once it has been identified as the largest bottleneck. That conversation happens in the wider context of the piece of software where things like changing the infrastructure are possibilities. Rather than the majority of performance improvement discussions that happen in reviews on things that are perceived bottlenecks being solved in a very local way. Once a solution has been decided, make the fix, release, monitor the impact and verify that it did actually solve the bottleneck. Back to “Is it fast enough for the business?” One bottleneck down, the next step is not to find the next bottleneck. Next, we need to ask if fixing the most recent bottleneck made the application fast enough for the business. Then we go back round the loop, fixing bottlenecks until we get to a point where the software is fast enough, then stopping at that point. Optimising people Not every business problem needs to be solved with technology. If there is some process that involves human steps and takes 48 hours, where waiting for people makes up 46 of those 48 hours and technology makes up the remaining 2 hours. Optimising the technology part of the process is probably not the bottleneck that needs solving. However, the same process can be applied (roughly speaking), to people and processes. If you don’t know what the process is, then probably a good first step in speeding it up is understanding what the process is then standardising it. 
Once the process is defined, ask if it is fast enough, and go round the loop of finding bottlenecks and fixing them until it is fast enough. Fixing bottlenecks in human processes might involve automation, it might just involve giving someone a second screen or better training. Summary The key thing I’d like to get across from this story is: Only optimise a piece of software to the point where it is fast enough for the business. Anything beyond that point is a waste of your expensive engineering time. There are always bottlenecks, the bottleneck in your software is probably not something you can predict, so do not try to predict it. Deploy your software, then find the real bottleneck. About the author Hi, I’m Doogal, I’m a Tech Lead that has spent a bunch of years learning software engineering from several very talented people. When I started mentoring engineers I found there was a lot of knowledge that I was taking for granted that I think should be passed on. These stories are my way of trying to pay it forward. I can be found on Facebook, LinkedIn or Doodl.la.
https://medium.com/swlh/how-to-write-fast-software-7cb4c10631bf
['Doogal Simpson']
2020-06-05 17:59:21.166000+00:00
['Software Development', 'Technology', 'Optimization', 'Software Engineering', 'Programming']
518
Complete Guide to the Agile Methodology Focused on Project Management
https://medium.com/mundo-framework/gu%C3%ADa-completa-de-la-metodolog%C3%ADa-%C3%A1gile-enfocada-a-la-gesti%C3%B3n-de-proyectos-a281ecde5222
['Pablo Álvarez Corredera']
2020-12-02 11:50:04.834000+00:00
['Startup', 'Projects', 'Work', 'Agile', 'Technology']
519
What To Build: Robert Kipp (General Superintendent Delta CM Team Laguardia Airport)
Conversation with Robert Kipp, General Superintendent Delta CM Team Laguardia Airport. We chat about what’s important to him when he looks for startup partners, his hesitations around AI, and what types of people the industry strongly distrusts. 1. Tell us about yourself, where you currently work, and your path on getting there. I am currently the General Superintendent and Director of Field Operations for Delta’s $3.8 billion renovation at LaGuardia Airport. My bachelor’s degree is in Criminal Justice and I had aspirations for a career in law enforcement, but after 9/11 I requested an active duty assignment and served two tours in Iraq. During my time in the Army was when I discovered I had a knack for logistics and managing complex teams. I really enjoyed my experience in the military and wanted my next career to have similar qualities. After some deep research on what industries fit the team work and problem solving profile that I was exposed to in the army, that naturally lead to construction management when I returned from my second tour. I worked for Clark Construction in Washington DC before coming up to New York City to work on the United Nations and Hudson Yards before the Delta LaGuardia project. I have settled into a little niche in the market building complex mega jobs in NYC. 2. Tell us about your role and what your mandate is and how this specifically relates to working with startups? As the General Super I handle all on-site coordination, logistics, quality control, labor relations, safety, and security, which is especially difficult when building an airport around an airport that serves 30 million passengers a year, and keeping it entirely operational in one of the busiest cities in the world. Throughout all of those aspects I focus on my staff and their performance and development. Construction, at its core, is a competitive team sport, and I want my team to perform at the top level. Technology, especially construction software, can be a force multiplier for my field staff, to ensure they have accurate, wholistic information at their fingertips as soon as possible. I keep my eye out for startups have the potential to be useful to my staff to increase efficiency and streamlines the flow of information. For me to spend time working with a startup I have to believe in their team and that their mission is going to make my day job easier. I will not work with teams that think they have all the answers because it stifles learning and growth if you believe you know everything already. 3. What are some of the interesting types of projects that you’re currently doing with startups? I have two way NDAs as part of the way I do engagements but I can list companies that I have worked with previously, such as Apple, Amazon, Bose, Plangrid, Rhumbix, and others. 4. What number of these projects move into production? By what criteria? One of the challenges we see startups facing is how to move a customer from pilot to production. Of the technologies we have looked at to adopt on site the biggest determinate factor is adaptability. What makes-or-breaks adopting a technology is determined during the rollout process. Is it simple and user-friendly? Does it simplify or decrease the time spent on that task? Does the software or website have a lot of bugs? The initial trial is where you learn about all of the hiccups and road bumps, and while every technology will have some, how easy or difficult it is to work through will determine whether or not the user will sign on for production. 
Construction is an old school environment, where guys are used to doing things the way they have for 20, 30, 40 years — if the user interface makes it more difficult or harder to navigate, it simply won’t be used. Startups need to listen to problems that need to be fixed and fix them without giving users a workaround. 5. What are the major challenges in your industry these days, and specifically ones that you think can be addressed by the right type of AI and or robotics application? Can you give some detailed examples? a) We have a severe shortage of personnel influx into the industry. What the industry needs, and what technology can help achieve, is cutting out the administrative waste. A field engineer can spend hours each week collecting daily reports, consolidating the information, running them around to the superintendents to sign, and hunting down missing ones. A coordinated, conformed drawing set on an iPad can save a superintendent hours just by eliminating running back into the trailers to check hard copies, or flipping between uncoordinated sets and RFI responses and submittals. All of the ways in which field staff spend time on administrative tasks can be completely eliminated and thereby add up lot of saved time, time that could be spent in the field and solving real issues. b) The capabilities of AI remain to be seen. Before AI can be useful we need big data on operations to analyze. In order to have big data on operations we need the resources to gather that information. Right now the question is: what do we need answers on and how do we go about collecting that data? How is that scalable within each job, and how does it extend across projects? Then, how do we tailor that big data to provide useful analysis to each individual site? For example, if we want data on access control we first need to set up the hardware to detect when people access the site and where they’re working. As a project builds and changes phases — from civil work to foundations, to coming out of the ground structure to interiors buildout — will that hardware be able to continuously give you the information you need? Will you have to change or relocate the hardware frequently? What are the specifics of your job site that need to be accounted for in the analysis of your operations? Across all aspects of the job AI has the potential to foresee major logistics or coordination mistakes before they happen, provide solutions, analyze operational efficiency and provide actionable data for superintendents to execute on in the field. c) We’ve seen robots automate and accelerate some interesting tasks, but the key to the future for robotics will be how adaptable they can be to an environment. So far there are a lot of robots that need a clean, clear, and pristine work space to function, which is the exact opposite of a construction site. Construction sites are dynamic and active with materials moving and work being put in place. Robotics startups need to look at how to have their technology adaptable to different environments, and be robust enough not only to handle change but also dust, dirt, and a few bumps and bruises. Construction is an industry that breaks things on a regular basis, and any tool that we use needs to be durable enough to hold up in our environment. 6. What type of startup would you be most excited to see? There are a lot of great startups out there that excel in one or two aspects of the job. 
What’s needed is a central hub that can integrate into each of those pieces to cross reference data, streamline the flow information, and act as a central site so that I don’t need to sign into one account to do NCRs and another account to do daily reports and another account to look at submittals. If you think about it, there are only three things you need to build a building: time, material, and workers to put that material in place. The value for those in management positions comes solely from the information we have and how we share it. A tool belt that has all of that information and efficiently gets each piece to everyone who needs it, without inundating each user with too much information, is the next key to success. 7. What should startups know about your industry before going in? What nuances or details about the industry are not so apparent from someone looking in? Construction is an old school industry, and unless your technology passes the dweeb test and works well from the moment it’s picked up, it will not be used. We might be dirty and sweaty but you are only going to get one shot at a first impression. We hate strongly distrust sales people and tech people so tread lightly, and remember we had a full time job before you brought me your new shiny box. Other advice: Before you do anything, spend some time on a job site feel it, see it, smell it. Look for solutions to real problems. Don’t solve problems that we don’t have. Make it quick and to the point. Construction is time = money so please never waste our time. Work just as hard as us. If I need tech support for your product and I can only call from 8am–4pm, then that’s not going to work for me. 8. Lastly, any recommended resources / reading (ex. Industry conferences, publications, experts to follow, etc.) for startups looking to build in your space? ENR future tech, Autodesk university, Ground Break, and World of Concrete are all shows you should check out. Talk to people that are really in the field not the folks who just want to talk about strategy.
https://medium.com/baidu-ventures-blog/what-to-build-robert-kipp-general-superintendent-delta-cm-team-laguardia-airport-17ece3826ca0
['Fang Yuan']
2019-11-15 16:02:10.035000+00:00
['Construction', 'Robotics', 'AI', 'Technology', 'Startup']
520
Building Yuki, A Level 3 Conversational AI Using Rasa 1.0 And Python For Beginners
Building Yuki, A Level 3 Conversational AI Using Rasa 1.0 And Python For Beginners Part 1: Setting up the base to program complex chatbots Fascinated by artificial intelligence powered chatbots or natural language understanding and processing but not sure where to start at and how to build stuff in this domain? You are at the right place. I have got you covered. Just read on and together let us unravel the simplicity of complex topics in our journey to build something marvelous. I would recommend you to have a quick read through the article once to get the overall idea, bookmark the link to this article, and then come back later to follow along with me in doing all the tasks I mention here. I have simplified the details to that extent that even if you are a middle school student, you can follow along well and learn all the aspects of bot design. All you need is the zeal and passion to build something and some basic understanding of how computers work and a little bit about programming! But yes, this tutorial is not only for beginners. Those with considerable experience might also find it helpful to kick start their development process. Before getting started I want you to know the main motivation in writing this one. To give a different perspective to what building something means To enable self-learners alien to this subject feel the ease at which something can be learned and finally to highlight what the future holds, it’s gonna be bots everywhere. Please note that I am writing this tutorial for a Windows 10 PC. However, a similar approach should work in Linux with very few modifications. If at any point in the article, any keyword, step, or technical term appears unclear, feel free to ask in the responses/comments and I will clarify it. With that being said, let us get started, keeping our end goal in mind. “Can you create an AI as complex as Jarvis, if not better?” Jarvis from Iron Man ✔️ Checkpoint 1: Understand What You Are Building A level 3 conversational AI is basically a computer program that can understand human language along with the intent of the speaker, the context of the conversation, and can talk like one. We, humans, assimilate natural language through years of schooling and experience. Interestingly, that’s exactly what we are going to do here, build a bot, parallelling how a human baby learns to converse. At the end of this project, here are the objectives we want to achieve, to create a bot that- can chitchat with you can understand basic human phrases can perform special actions can autonomously complete a general conversation with a human can store and retrieve information to and from a database (equivalent to its memory) is deployed on Telegram, Facebook Messenger, Google Assistant, and Alexa, etc. ✔️ Checkpoint 2: Setting Up Your Development Environment To build something incredible and quick, you will need great tools and easy to use frameworks. Here is my setup- Download and install the latest versions of these in your system. It is preferred to install them with default paths and settings. After installation, go to your search bar in desktop and type ‘Anaconda Prompt’ (terminal). Open it and create a new virtual environment for our project by typing the following command: conda create -n bots python=3.6 This will create a virtual environment called ‘bots’ (you can create an env with whatever name you want) where you can install all the dependencies of your project without conflicting with the base. 
It is a good practice to use virtual environments for projects with a lot of dependencies. Next, activate this virtual environment by typing the following (always ensure you activate it when making any changes or running the program): activate bots Now, install the Rasa stack by typing the following command: pip install rasa It may take a few minutes to download all the packages and dependencies. There is a chance that you may encounter some error or warning. When you are stuck somewhere, remember that Google has all the answers you want. Search your error message and you'll find a solution on sites like Stack Overflow. In a similar way, you can install any Python package in your activated environment. We will be using a library called spaCy in the future. Install it now (-U tells pip to upgrade the package if it is already installed): pip install -U spacy Then run the following command in the terminal to download a spaCy model: python -m spacy download en_core_web_sm ✔️ Checkpoint 3: Creating The Skeleton Of Our Bot We are using the open-source Rasa stack, which is a very powerful, easy to understand, and highly customizable framework for creating conversational AIs (hence the choice). Before we finalize the functionalities of the bot, let us do some magic with a single command. Create a new folder on your desktop and name it for your reference. I named mine 'Yuki', the name I gave to my bot. Now, go back to the Anaconda Prompt terminal and reach your project directory by typing the following command: cd desktop\yuki Note that the syntax is cd followed by the path to YOUR folder. Next, type the following command: rasa init You will see a prompt asking you to enter the path to the project. Since we are already in the project folder (Yuki), you can just press 'Enter'. Just follow along with the terminal. You will see a core and an NLU model getting trained, after which you should get a prompt to talk to the bot in the command line. Boom! You have just built a bot! Talk to the bot, say a 'hi' maybe! So what the hell happened just now? You basically created a Rasa project with the above command, which initialized a default bot in your project folder. We will be working on top of this to build our bot. ✔️ Checkpoint 4: Understanding The Structure Of Your Bot Now, open the new folder you created before. You will see many new files in it. This is your project structure. Here is a brief on each of them: Data Folder: Here lies your training data. You will find two files, nlu.md (training data to make the bot understand the human language) and stories.md (data to help the bot understand how to reply and what actions to take in a conversation). Models Folder: Your trained models are saved in this folder. actions.py: This is a Python file where we will define all the custom functions that will help the bot achieve its tasks (a rough sketch of one such action appears at the end of this article). config.yml: We mention the details of a few configuration parameters in this file. I'll get back to this later.
credentials.yml: To deploy the bot on platforms like Telegram or Facebook Messenger, you will need some access tokens and keys. All of them are stored in this file. domain.yml: One of the most important files, this is like an index containing details of everything the bot is capable of. We will see more about it later. endpoints.yml: Here, we have the URL of the place where the actions file is running. Once you are clear with the file structure, read ahead. So, how exactly does it work? Rasa has many inbuilt features that take care of a lot of miscellaneous things, allowing you to focus more on the practical design of the bot from an application point of view. Don't think too much about the inherent technical aspects at this moment. Follow along carefully, and you will find yourself building a complex AI at the end of this tutorial. ✔️ Checkpoint 5: Designing The User Experience (UX) Before you go any further, you need to decide the purpose of your bot. What will your bot be used for? Here are some generic ideas: A weather bot that can tell the user about various weather parameters in the required area. A Wikipedia-based bot that can answer any kind of general knowledge question. A Twitter-based bot that can update the user with what's trending right now in the location of interest. A generic search bot that can search for something (e.g. jobs) based on users' queries. From a commercial standpoint, an ordering/booking bot with which users can purchase goods like clothes, order food, book tickets to movies or travel, or schedule appointments with professionals like doctors, lawyers, etc. In short, the possibilities are limited by your own imagination. Think of JARVIS or EDITH from the MCU. It is even possible to build something like that. Too many choices? I will design my bot here, which you can try to replicate, but I hope you can be a little more creative and build upon your own ideas. Whatever idea you have, the steps will be similar. As for Yuki, I will be demonstrating almost everything that is possible, with the end goal of creating an all-purpose conversational AI in a series of tutorials. In my first design, I want to demonstrate these two things: How to use APIs (this stuff is like magic, super useful and easy to use) Playing with custom functions Here's what I want my bot to be capable of for now: Fetch the latest news or articles on the internet based on the user's topic of interest or search query Handle simple general conversations Now I will list down all the functions/actions that the bot needs to achieve its capabilities: utter_hello handle_out_of_scope utter_end get_news (and so on…) Make a brief list like this for your bot. Next, I made a list of human intents the bot will have to detect from the messages users send it. Intents greet bye getNews affirm deny thank_you out_of_scope (everything else for which the bot is not yet programmed must be detected as out of scope) All this data goes into the domain.yml file. So, here's how I expect my ideal user to interact with Yuki: User: Hi! Yuki: Hola! I am Yuki. What's up?
User: Get me the latest news updates Yuki: Give me a topic/keyword on which you would like to know the latest updates. User: <enters the topic name> Yuki: Here's something I found. <Links to news articles fetched> Yuki: Hope you found what you were looking for. User: Thanks! Yuki: You're welcome. This is generally referred to as a 'Happy Path', the ideal expected scenario that happens when a user interacts with the bot. Whatever bot you want to design, have an idea of the basic conversational flow it is expected to handle, like this. Write down multiple flows with different possibilities for your reference. The above flow (along with any minor variations to it) is the basic expectation we want to achieve. Apart from this, we are going to teach Yuki how to respond to chitchat questions like 'how are you?', 'what's up', 'who made you?', etc. Once you are ready, move to the next checkpoint. ✔️ Checkpoint 6: Building Your NLU Model First, let us teach our bot some human language so it can identify the intents. Start Visual Studio Code, click 'File->Open Folder' and choose the project folder (in my case, Yuki). Open the data/nlu.md file in your code editor. You will already see some default intents in it. This is the place where you add data about every intent the bot is expected to understand and the text messages that correspond to that intent. Update this file with all the required intents in the same format. My finished nlu.md file will look like this: ## intent:affirm - yes - indeed - of course - that sounds good - correct - alright - true ## intent:bye - bye - goodbye - see you around - see you later - ttyl - bye then - let us end this ## intent:chitchat_general - whats up? - how you doing? - what you doing now? - you bored? ## intent:chitchat_identity - what are you? - who are you? - tell me more about yourself - you human? - you are an AI? - why do you exist? - why are you yuki? ## intent:deny - no - never - I don't think so - don't like that - no way - not really - nope - not this - False ## intent:getNews - Send me latest news updates - I want to read some news - give me current affairs - some current affairs pls - Find some interesting news - News please - Get me latest updates in [science](topic_news) - latest updates in [sports](topic_news) - whats the latest news in [business](topic_news) - send news updates - Fetch some news - get news - whats happening in this world - tell me something about whats happening around - interesting news pls - latest updates in [blockchain](topic_news) - get me latest updates in [astronomy](topic_news) - any interesting updates in [physics](topic_news) - I want to read something interesting - I want to read news - latest news about [machine learning](topic_news) - latest updates about [Taylor Swift](topic_news) ## intent:greet - hey - hello - hi - good morning - good evening - hey there - hola - hi there - hi hi - good afternoon - hey - hi ## intent:thank_you - thanks! - thank you As is directly evident here, you are basically giving your bot the data to make it understand which words imply which intent of the user. These are some fundamental human intents. You can add more if you want. Also, notice how I divided the chitchat intent into chitchat_general and chitchat_identity for more specificity.
To create a robust bot, be as specific as you can with your intents. Also, notice how I have placed some words in [] followed by (). These are the entities, or the keywords with some significance in the user's text. To help the bot understand them, this general syntax is used. And it's as simple as that. You are done with NLU! Rasa will take care of the training part for you. Let's now move on to the next checkpoint.
✔️ Checkpoint 7: Updating The Domain File
The domain.yml file must contain a list of all of the following:
- Actions: These include the names of all the custom functions we implement in actions.py as well as the 'utter actions'.
- Entities: These are specific keywords present in user input which the bot can extract and save to its memory for future use. For now, my bot requires only one entity, which I named 'topic_news'. This basically refers to whatever topic the user seeks news about. Decide what entities you need to extract based on your use case. For example, the name of a person if you want to create a user profile internally, a geographical location if you want to give weather updates, food items if your bot can order food online, email IDs to subscribe the user to something, etc.
- Intents: We have already discussed intents. In the domain file, you'll have to include the list of the same.
- Forms: The form action is a special feature provided by the Rasa stack to help the bot handle situations where it needs to ask the user multiple questions to acquire some information easily. For example, if your bot has the ability to book movie tickets, it must first know details like the name of the movie, the showtime, etc. While you can program the bot to react according to user context, using forms is highly efficient in such situations. In my case, I have a getNews form action, which we will explore further when we look at how it can be implemented. The names of all the form actions you implement must be put in the domain file.
- Slots: In short, the entities and other internal variables that the bot needs are stored in its memory as slots.
- Templates: These are like the blueprints of all utter actions.
Utter actions need not be implemented anywhere else separately. But remember, they must be named in the utter_<sample> format. You are essentially teaching the bot the language, the text that it must use when it wants to send some message. You can have multiple text templates for each utter action.
Check out my domain.yml file for reference, to understand how you should update your domain file with respect to your bot design. There are so many customizations and variations possible. The format you see there is given by Rasa, and includes keywords that make use of its features, like 'buttons'. As you can see, your domain file contains a summary of everything the bot can do. Up to this point, we have hardly written a line of proper code, and we are about halfway done with the project! Notice how much ideation and brainstorming goes into building a high-level application even before you start coding. That's the important part: the use case and the design. Anyone can write lines of code, but only a few can create brilliant products. Remember that a good programmer isn't always a great product developer, and a great product developer isn't always required to be a good programmer. Highly functional AIs can have hundreds of actions, intents, and entities, and be connected to huge databases. We are now taking the first steps in this direction.
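For orientation only, here is a minimal sketch of what a domain.yml along these lines might contain, assembled from the intent, entity, and action names used in this tutorial. The form name news_form and the example template texts are placeholders of mine; the author's actual file, linked in the original post, is more complete:

intents:
  - greet
  - bye
  - getNews
  - affirm
  - deny
  - thank_you
  - chitchat_general
  - chitchat_identity
  - out_of_scope

entities:
  - topic_news

slots:
  topic_news:
    type: text

forms:
  - news_form          # assumed name for the getNews form action

actions:
  - utter_hello
  - utter_end
  - handle_out_of_scope

templates:
  utter_hello:
  - text: "Hola! I am Yuki. What's up?"
  utter_end:
  - text: "Hope you found what you were looking for."

A domain in this shape is what rasa train reads alongside nlu.md and the story files when it builds the model.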
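And since the tutorial promises to show how APIs and custom functions come into play, here is a rough, hypothetical sketch of what the form action behind get_news could look like in actions.py with the Rasa 1.x SDK. The news_form name, the NewsAPI-style endpoint, and the API_KEY placeholder are my assumptions for illustration, not the author's actual implementation:

# actions.py -- illustrative sketch only; the tutorial's real implementation may differ
import requests
from rasa_sdk.forms import FormAction


class GetNewsForm(FormAction):
    """Asks the user for a topic, then fetches a few matching articles."""

    def name(self):
        return "news_form"  # must match the form listed in domain.yml

    @staticmethod
    def required_slots(tracker):
        return ["topic_news"]  # the only thing we need from the user

    def slot_mappings(self):
        # Fill the slot from the extracted entity, or fall back to the raw text
        return {"topic_news": [self.from_entity(entity="topic_news"), self.from_text()]}

    def submit(self, dispatcher, tracker, domain):
        topic = tracker.get_slot("topic_news")
        # Hypothetical call to a NewsAPI-style endpoint; replace API_KEY with your own key
        response = requests.get(
            "https://newsapi.org/v2/everything",
            params={"q": topic, "pageSize": 3, "apiKey": "API_KEY"},
        )
        articles = response.json().get("articles", [])
        if articles:
            dispatcher.utter_message("Here's something I found:\n" + "\n".join(a["url"] for a in articles))
        else:
            dispatcher.utter_message("Sorry, I couldn't find anything on that topic.")
        return []

An action server like this would be started with rasa run actions, which is exactly what the URL in endpoints.yml points to.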
https://medium.com/the-research-nest/building-yuki-a-level-3-conversational-ai-using-rasa-1-0-and-python-493e163c7911
[]
2019-11-18 21:16:59.152000+00:00
['Technology', 'Chatbots', 'AI', 'Tutorial', 'Bots']
521
⭐️ PEP Network is highly rated ⭐️
https://medium.com/pep-ico/%EF%B8%8F-pep-network-is-highly-rated-%EF%B8%8F-bedd1372a976
['Katherine Romaniuk']
2018-05-16 07:04:05.996000+00:00
['Blockchain', 'ICO', 'Token Sale', 'Cryptocurrency', 'Technology']
522
3 Days, 100 Participants, and a Scavenger Hunt: How Ro’s Hackathon Builds Technology and Camaraderie
This Fall, Ro launched its inaugural hackathon to drive technology innovations and spur new ideas that continue to improve the healthcare experience for patients and providers. Well-known to those working in the technology industry, a hackathon is a time-restricted, sprint event in which people with different skills and backgrounds come together to design and develop new technology products and solutions. Hackathons are designed to cultivate an environment of productivity, creativity, and ingenuity. Ro’s hackathon, dubbed the Rockathon (Ro + hackathon), brought together more than 100 participants and various cross-functional teams across three days of ideating, designing, and demoing more than 15 projects. In this spotlight, members of Ro’s Tech Org who led the Rockathon will share insights on how it came to fruition and the benefits it offers. Chloe Pi kicking off the Rockathon virtual demos. What motivated you to launch a hackathon at Ro? Chloe Pi, Chief of Staff: We launched the Rockathon with several goals in mind. First, we hoped to provide a dedicated time and space for Ro employees from different teams and backgrounds to get together to drive creative solutions to problems. We know that effective cross-functional collaboration can lead to new solutions that empower providers, such as Ro’s automated auditing systems, and features that benefit patients, like our newly launched in-visit product recommendations. Second, we wanted to present a team-building opportunity, to help foster relationships by working together to tackle unique challenges in a unique setting. And, finally, we were excited to create a new tradition at Ro, joining others like Ro’s Founders & Jam, informal breakfast meetings every Friday, Ro Passions sessions, fun and instructive lessons from Ro employees about a hobby or interest, and Work-It Wednesdays, meeting-free deep-work time every week. Raphael Schmutzer, Senior Product Designer: We hosted the Rockathon to give teams a chance to take a step back from their ongoing projects to brainstorm together, explore new ideas, and to find new ways of helping us reach Ro’s goals of expanding access to high-quality, affordable healthcare. The Rockathon allowed a temporary departure from (fully) focusing on our day-to-day or week-to-week initiatives, and to instead allow our team the flexibility to ask bigger questions and propose ambitious solutions to problems we had tried to address before or those we were just diving into for the first time. The Rockathon encouraged everyone to think months and years into the future to surface ideas that — if fully designed, built, and tested — could improve Ro’s platform and better serve patients and providers. Jillian Stein, Product Manager: We felt passionately about launching the Rockathon in a year when our team has gone completely remote. Ro has continued to hire aggressively throughout this year, so the Rockathon created a place for team members (people who previously sat next to each other in an office every day and people who have never met in-person) to bond in a fun, productive, and competitive environment. These relationships are so important — they encourage trust and communication across our teams well beyond the whirl-wind week of the Rockathon. “In its first iteration, the Rockathon has already become a staple of Ro’s culture. From the start, the event generated such energy and excitement across the organization, driving the teams’ ingenuity and momentum throughout the week. 
It was incredible to see such deep commitment to building a patient-centric healthcare system. Engineers, designers, and data scientists worked alongside physicians, pharmacists, and business stakeholders to brainstorm and design solutions to make healthcare more accessible and enjoyable for all. Perhaps most impressive was that the participants produced not just good ideas, but concrete solutions that could be shipped to benefit our patients — vastly exceeding our expectations,” said Saman Rahmanian, Co-founder and Chief Product Officer of Ro. What are some of the benefits of hosting a hackathon? Chloe: Hackathons are designed to not only build technology, but also camaraderie and cohesion among participants. This was clear throughout the Rockathon, as those participating began the event tackling tough challenges and tight timelines, and ended the week proudly celebrating the results of their work in our demo presentations. It was incredibly satisfying to see the Rockathon’s open and encouraging atmosphere drive the teams to think differently about problems, solutions, or technology. This paid off with ideas both bold, such as integrating voice-enabled features for patients, and practical, such as adding QR codes to treatment packaging. Raphael: The Rockathon was a source of energy and inspiration. There was a palpable excitement among those who participated, pushing one another to pursue their ambitious projects. Everyone at Ro was impressed during the company-wide presentations to see all that had been accomplished in just a few short days. As with any successful hackathon, the Rockathon not only revealed what was imaginable, but offered solutions that had a demonstrable value, clear functionality, and were a reasonable investment to build. This served as a reminder of one of Ro’s principles, that we must always be willing to “look under the rock” to find the best ideas and solutions to benefit our patients. Raphael leads his team’s demo during the Rockathon. What was your approach to designing a hackathon? Raphael: From the start, we designed the Rockathon to strike a balance of productivity and fun. One way we did this was to introduce two awards that served to fuel competition as well as keep the teams focused on Ro’s mission. We presented an award for the project that had the most potential to improve the patient experience and one with the greatest potential to improve patient safety. To break up the fast-paced work throughout the week, teams also joined in for a virtual scavenger hunt and Jeopardy game with prizes. Chloe: Hackathons typically include people with technical backgrounds, but we didn’t want to limit ours to the Tech Org. To make this happen, we selected project ideas that were intentionally cross-functional. We spent time promoting the event, educating those new to hackathons about the process and the role they could play, and championing the value it would bring to Ro. As a result, we had strong participation not only from the Tech Org, but from employees in a variety of roles. For instance, our entire legal team participated, including the newest members who had just finished their onboarding! Jillian Stein presenting during the Rockathon demos. What made Ro’s hackathon unique? Jillian: At Ro, we have the privilege of working with talented people from such diverse personal and professional backgrounds (from engineers who have built fintech iOS apps to registered nurses with experience in emergency medicine).
On any given day, a cross-functional team at Ro may include a legal expert, a physician, a product manager, and a marketing leader. Ro brings these kinds of perspectives together every day, and the Rockathon was no different. From the planning committee to the participants to the judges, the Rockathon reflected exactly the type of cross-functional collaboration we hoped to encourage throughout the week — the same kind of collaboration that we have become accustomed to at Ro. This cross-functional approach is what made our first Rockathon so successful, and it’s why we are looking forward to the next.
https://medium.com/ro-co/3-days-100-participants-and-a-scavenger-hunt-how-ros-hackathon-builds-technology-and-373c60b68ac2
[]
2020-12-14 14:23:47.251000+00:00
['Health Technology', 'Telemedicine', 'Hackathons', 'Company Culture']
523
Happy Independence Day — in more ways than one
Tomorrow, many of us in the US will be sitting back, barbecuing, lighting sparklers, and jumping into some body of water to celebrate our country’s independence from an antiquated system put in place by the British (said in jest to poke fun at my UK friends). What will I be doing? I’ll be celebrating the way most do, with my four children ranging in ages from 3 to 19, watching them play while I have a laptop and a phone at my fingertips. I am one of the lucky working parents out there who have the ability to maximize my time and increase productivity by declaring independence from a traditional office. This is a fancy way to say I work from home for a global tech company — Flock. What seemed impossible, just a decade ago, is now possible due to technology. As a global community, our advancement, especially when it comes to project collaboration apps and software, has impacted everyone, especially working moms and/or dads. Since I am a mom, breadwinner, and general manager of all things within the house, I can personally attest to the impact technology has had on my career and my family. How was this made possible? Today’s work-from-home capabilities have evolved and are much more sophisticated than conference calls from the late 90s, which is when I started my career. I, along with many of my colleagues in the industry, can now collaborate on documents in real time, and access email, IM, video conferencing, cloud storage, and other project collaboration technologies that make it possible for work to be done outside the office. On a personal level, this technology allows me to shave off three hours on a daily commute, something that is very common in the Silicon Valley. Those three hours are precious and can be productive. I can use this time to develop a strategy for global expansion or to pick up a sick child from school. From a personal point of view, the evolution of team communication tools is groundbreaking for families. I know a lot of parents who feel the same way. We are no longer tethered to one location with limited options. We are able to completely throw ourselves into our work without sacrificing quality, promotion opportunities, and/or our family life. I can take video conference meetings at 6 a.m. with colleagues in Mumbai (though I start off every conversation excusing my appearance) and collaborate with my team in Brazil at 3 p.m. I can hold discussions with my Russian team without worrying about the time difference or the commute. All thanks to modern day team collaboration tools! But it’s not all sunshine and roses. Sometimes, I’m tired or I set the alarm clock for 6 p.m. as opposed to a.m., an event that occurred merely four days ago (shhhhh). That said, I don’t know a parent who hasn’t done this at one time or another. The beauty of working remotely, and via technology tools is that I can connect easily with my colleagues spread across the globe. My team doesn’t have to wait for me to do the 2½ hour prep/drive down to the office to speak with me. The technologies I personally use enable flexibility and independence, and help me build my career. They also make me a better mother and US citizen. I can work on strategy sessions to expand Flock’s global reach, help the local clean up project, and then review my child’s report card at 2:45 p.m. Technology has liberated me in ways that I never thought possible and certainly in ways our forefathers/mothers hadn’t even imagined. 
We built this nation on the backs of people (all people) who were brave, hardworking, and innovative. They challenged the norm so that their descendants could experience a new world, challenge the status quo, and even let some of us working parents “have the best of both worlds”. We aren’t perfect…just ask my children who point this out to me on a daily basis. That said, I’m hopeful that we, as a nation and a tech community, will continue to grow, improve, and become even more family friendly. Some tools that have helped me become a productive person: Flock Google Docs Superplus Timezone Venmo P.S. I am literally typing this from a friend’s home 100 miles away from San Francisco. My boy is turning six tomorrow and he wanted to get away from the fog of the Sunset District. Thank you, Flock! Happy 4th, moms and dads. Signed, Mother working outside in the sun *Please note that these are my personal favorites and are in no way endorsed by my amazing employer FLOCK. -Authored by Christina Andrea Sarracino, proud patriot, PR expert at Flock by day, and mother of four by night and day.
https://medium.com/flock-chat/happy-independence-day-in-more-ways-than-one-2041bdb7ac82
['Christina Andrea Sarracino']
2017-09-11 10:39:38.874000+00:00
['Technology', 'Light Reads', 'America', 'People And Stories', 'Work Life Balance']
524
Solidity Fundamentals: Types
Fixed-size byte arrays — bytes1 (byte), bytes2 … bytes32
The value types bytes1, bytes2, …, bytes32 hold a sequence of 1 to 32 bytes. The keyword byte is an alias for bytes1. Besides that, these fixed-size value types have a read-only member named length, which yields the fixed length of the byte array. Variables of a fixed-size bytes type can also be passed between contracts. The type byte[] is an array of bytes, but due to padding rules it wastes 31 bytes of space for each element (except in storage); it is better to use the bytes type instead. We use hex representation to initialize them in Solidity:
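The code sample that followed this sentence did not survive the export; as a stand-in, here is a small sketch of how fixed-size byte arrays are typically initialized with hex literals (the contract and variable names here are mine, not from the original article):

// SPDX-License-Identifier: MIT
pragma solidity >=0.7.0 <0.9.0;

contract FixedBytesExample {
    // Hex literals must match the declared width exactly
    bytes1 a = 0xb5;        // 1 byte
    bytes2 b = 0x56ff;      // 2 bytes
    bytes4 c = 0xdeadbeef;  // 4 bytes

    function lengthOfC() external view returns (uint256) {
        return c.length;    // read-only member, always 4 for bytes4
    }
}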
https://medium.com/coinmonks/solidity-fundamentals-c94460e3be3d
['Ferdi Kurt']
2021-01-29 05:25:57.850000+00:00
['Ethereum', 'Smart Contracts', 'Blockchain', 'Cryptocurrency', 'Technology']
525
The world’s first blockchain UGC game, CardMaker, is now launching an idea collection from players, with a prize of up to 1,000USD
After three years of development and two years of polishing in the field of blockchain gaming, CardMaker has officially announced that the game is now preparing to land on Steam!
Who are we? CardMaker is an interdisciplinary indie game which won first prize in the Shanghai International Makers NEO Blockchain Black Horse Competition 2018, the Best Game Award & Most Popular Award in the first NEO Blockchain Development Competition 2018, a Top 10 Blockchain Game spot in the Golden-Tea Award 2019, and the Sina Game Award in CGWR 2019. It is among the first batch of ecology partners of Cocos-BCX and has been covered repeatedly by DappReview and Dapp.com. Its fresh and cute art style, as well as the depth of its strategy, is comparable to Hearthstone, Slay the Spire, Night of Full Moon, and other high-quality games. What's more, CardMaker opens the door for players to create their own cards, roles, and modules, and realize their own game dreams!
Our future: As we all know, Steam has about 7 million active users in China. The top four markets are 21 million users in the United States, 13 million in Russia, 8.7 million in Germany, and 7.3 million in Brazil. China is the only Asian country in the top 10. Helping conventional Steam players understand how blockchain works and enjoy the fun of blockchain games will become CardMaker's primary goal. The game is still in intensive development for its Steam release, but the official registration has already been made on Steam. We believe it will be updated gradually in the future.
Your idea matters: In order to let more players participate in the game, and to constantly improve its quality through player feedback, we are now starting to collect game ideas from all players! Any good UGC proposal is welcome. You can come up with a complete and logical UI design, or provide great art styles, materials, or even complete module stories. We will provide a total of 25,000,000 CAKE and 1,000 USDT (ETH or NEO) as a design bonus pool (the bonus comes from the UGC pool), and we will choose according to the quality of the materials. With each update, bonuses will be delivered and the town selected by the player. The materials and the complete proposal can be sent to [email protected] via email as text or a document. Turn your ideas into a fully executable proposal and grow with the game you're playing.
Follow us :) Website: https://www.cardmaker.io Game: Asia node: https://neo.cardmaker.io/?from=medium American node: https://neo2.cardmaker.io/?from=medium User Center: Asia node: http://neoa.cardmaker.io American node: http://neoa2.cardmaker.io Twitter: https://twitter.com/cardmakerio Telegram: https://t.me/CardMaker Discord: https://discord.gg/tNjgXUW Facebook: fb.me/cardmaker.io
https://medium.com/@cardmaker/the-worlds-first-blockchain-ugc-game-cardmaker-is-now-launching-an-idea-collection-from-players-8007ffe22b0d
[]
2019-11-15 07:51:37.019000+00:00
['Game Development', 'Blockchain', 'Blockchain Startup', 'Blockchain Technology', 'Games']
526
PCI Rules for Storing Credit Card Numbers in a Database
Many web developers and software programmers design platforms that require digital payments. It is important that developers of payments solutions understand how and why their solution handles cardholder data (CHD). There are many reasons why a solution might want to store that data, either short or long term, including payment processing, transaction history, or recurring billing. Consumers assume that merchants and financial solutions will handle this data in a secure manner to thwart theft and prevent unauthorized use. The reality is that many merchants may not be aware that they are storing CHD. Industry research indicates that up to 67% of merchants today are storing unencrypted cardholder data. The Payment Card Industry Data Security Standard (PCI-DSS) is a widely accepted set of policies and procedures intended to optimize the security of credit, debit and cash card transactions and protect cardholders against misuse of their personal information. A set of requirements set forth by the PCI Security Standards Council (PCI-SSC) and supported by the major card brands, PCI-DSS requirements apply to all entities that store, process or transmit cardholder data. PCI-DSS requirements state that cardholder data can only be stored for a “legitimate legal, regulatory, or business reason.” In other words: “If you don’t need it, don’t store it.” Those with a legitimate business reason to store cardholder data must understand what data elements PCI-DSS allows them to store and what measures they must take to protect that data. It is important to note that these statements apply to Cardholder Data (16-digit Primary Account Number, expiration date, cardholder name), and do not apply to Sensitive Authentication Data (Track Data, PIN, PIN Block, CVV). Sensitive Authentication Data (SAD) can never be stored after authorization. If cardholder data is to be stored, PCI compliance requirements state the cardholder data must be rendered unreadable using industry-standard techniques. Credit Card Data: What is Allowed to be Stored Validating entities are permitted to store data classified as Cardholder Data (CHD). This data includes the 16-digit primary account number (PAN), as well as cardholder name, service code, and expiration date. Traditionally, this data is located on the front of the card (EMV chip data is not Cardholder Data and cannot be stored after authorization). Credit Card Data: What is Not Allowed to be Stored Sensitive Authentication Data (SAD) can not be stored after authorization of a transaction. This data includes the full magnetic stripe data found on the back of the card, as well as any equivalent data on the EMV chip or elsewhere. SAD also includes the CVV (or equivalent data) as well as the PIN and PIN block. This data is extremely valuable to attackers for use in both card-present and card-not-present environment. PCI Requirements for Storage of Cardholder Data The PCI-DSS is defined by twelve PCI requirements, broken down over 220 sub-requirements. For the purposes of this discussion, we will focus on Requirement 3: Protect stored cardholder data. Requirement three can be broken down over multiple sub-requirements. The overarching principle is that limiting, prohibiting, and deleting stored cardholder data eliminates a key target for cybercriminals. Merchants that do not store cardholder data are much less likely to suffer an expensive, time consuming, and reputationally damaging breach of their customers’ personal data. 
It is important to know the definitions and differences between Account Data, Cardholder Data, and Sensitive Authentication Data. Account Data represents all the data that can be found on a credit card. Account Data is further broken down into either Cardholder Data (CHD) or Sensitive Authentication Data (SAD). Cardholder Data (CHD) includes the 16-digit PAN, expiration date, and cardholder name. This data is traditionally (but not always) represented on the front of the card. Storage of cardholder data should be limited to what is necessary to meet legal, regulatory, or business needs. Sensitive Authentication Data (SAD) includes the sensitive track data held by the magnetic stripe, CVV, PIN, and PIN block. This data can never be stored after authorization. The only entity that may store SAD is an issuer, and only under specific conditions and rationales.
PCI Rule 3.1
PCI-DSS requirement 3.1 lays out the methodology necessary to ensure that cardholder data is limited to that which is necessary for legal, regulatory, or business needs. This requirement states that validating entities must develop data retention policies and secure deletion policies, as well as a quarterly process to identify and remove any cardholder data that exceeds the retention period. This quarterly process is required whether or not the entity is aware they are storing cardholder data. There are many tools and techniques that can be used to identify this data. Most common is the use of a data discovery tool combined with best practices to protect against a physical compromise.
PCI Rule 3.2
PCI-DSS requirement 3.2 states that Sensitive Authentication Data (SAD) cannot be stored after authorization, even if it is encrypted. (Encryption changes plaintext into ciphertext.) SAD includes the full track data, CVV, and PIN data. This data is extremely valuable to attackers for use in fraudulent transactions in both card-present and card-not-present environments. The only entities that can store this data are issuers with a legitimate business need related to the issuing services. Validating entities must create a cardholder data flow diagram documenting where and how cardholder data moves through the system and/or is stored.
PCI Rule 3.3
PCI Requirement 3.3 states that the 16-digit Primary Account Number (PAN) must be masked when displayed. The maximum that can be displayed are the first six and last four digits. The full PAN can only be displayed for those users whose roles include a legitimate business need to view the full PAN. This requirement applies to displays of PAN on screens, paper receipts, and other printouts.
PCI Rule 3.4
If the storage of PAN is unavoidable, that data must be rendered unreadable wherever it is stored. The PCI-DSS explicitly enumerates the acceptable methods for rendering this data unreadable. These methods include:
- Strong one-way hash functions of the entire PAN. Also called the “hashed index”, this displays only index data that points to records in the database where the sensitive data actually reside.
- Truncation. Removing a data segment, such as showing only the last four digits.
- Index tokens with securely stored pads. An encryption algorithm that combines sensitive plain-text data with a random key or “pad” that works only once.
- Strong cryptography. Cryptography is defined as the use of mathematical formulas to render plain-text data unreadable.
Rendering PAN data unreadable means that if an attacker were to get the data, it would be extremely difficult and time-consuming to decrypt it. This means the data becomes essentially useless to attackers.
PCI Rule 3.5
PCI rule 3.5 expands on the use of cryptography and requires that validating entities take the necessary steps to protect encryption keys from disclosure and misuse, and document those procedures. If an attacker gets ahold of the encryption keys, the data can be decrypted. This applies to data-encrypting keys as well as key-encrypting keys, to limit the possibility of attackers using any of these keys to decrypt data and expose cardholder data. Cryptographic keys must be stored in the fewest locations possible, with the least number of individuals having access. Validating entities must consider both external threats (hackers, physical threats) as well as internal threats from employees.
PCI Rule 3.6
Key management processes for the use of cryptographic keys must be fully documented. This includes the secure generation, distribution, and storage of cryptographic keys, as well as policies that require key changes at the end of the cryptoperiod or if the integrity of the key has been weakened. This weakening could result from a team member with knowledge of the clear-text encrypting key departing the organization, or from the keys being suspected of compromise.
Conclusion
Developers of payments solutions must make sure they understand how and why their solution handles cardholder data (CHD), and they must also ensure PCI-DSS compliance for storing credit card numbers in a database. By keeping these tips top of mind, developers can help protect sensitive cardholder data from falling into the wrong hands. Contact us today to learn how we can help you with PCI compliance.
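To make Requirements 3.3 and 3.4 a little more concrete, here is a toy Python sketch of masking a PAN to the first six and last four digits and producing a keyed one-way hash of the full PAN. The function names are mine, and this is an illustration of the concepts only, not a compliance recipe: real systems need vetted cryptography, key management per Requirements 3.5 and 3.6, and review by a qualified assessor.

# Toy illustration of PCI-DSS Requirement 3.3 (masking) and 3.4 (one-way hashing).
import hashlib
import hmac
import os

def mask_pan(pan: str) -> str:
    """Display at most the first six and last four digits (Requirement 3.3)."""
    return pan[:6] + "*" * (len(pan) - 10) + pan[-4:]

def hash_pan(pan: str, secret_key: bytes) -> str:
    """Keyed one-way hash of the entire PAN (one option under Requirement 3.4)."""
    return hmac.new(secret_key, pan.encode(), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    key = os.urandom(32)          # in practice, managed per Requirements 3.5/3.6
    pan = "4111111111111111"      # a well-known test card number
    print(mask_pan(pan))          # 411111******1111
    print(hash_pan(pan, key))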
https://medium.com/@globalpaymentsintegrated/pci-rules-for-storing-credit-card-numbers-in-a-database-b9029ae833e4
['Global Payments Integrated']
2020-12-24 15:17:53.050000+00:00
['Payments', 'Pci Dss', 'Payment Processing', 'Fintech', 'Payments Technology']
527
2020 — Is Blockchain still a thing?
2020 — Is Blockchain still a thing? A review of 3 years of blockchain development This is a story about Cryptocribs and Cryptokitties, Quantum Computers and Bitcoin Heists, Blood Diamonds and Mandalorian Bountyhunters. A lot has happened in the last 3 years. Photo by Thought Catalog on Unsplash In the past years, blockchain was one of those hype topics which consumed and moved vast amounts of money but at the same time was on everybody’s bulls***t-bingo card. The technology itself had already existed for over 10 years, but only in the past 3 years did it really take off, massively impacting the realms of technology and finance. Today the market of blockchain technologies is worth 3 billion USD. In financial assets you can find around 250 billion USD in cryptocurrencies. As an engineer I focus more on the technical development than on the financial aspects, so even though the mentioned numbers are impressive, they are only a weak indicator for the maturity of the technology itself. There are these stories about putting the word “blockchain” into your company name and seeing investments go through the roof without any evidence that blockchain technology would make sense in the given context or was even used. I even found a bachelor thesis which investigated “The effect of blockchain related corporate name changes on stock prices”. Number of bitcoin transactions per month (logarithmic scale) — https://en.wikipedia.org/wiki/History_of_bitcoin In 2018 Bitcoin reached the peak of its popularity and value and subsequently brought the peak of hyping the topic of blockchains. Entrepreneurs invented and reinvented everything with blockchain, including amazing gems like Cryptokitties. That year everything was blockchain. Once upon a time — in 2018 If you follow our work at AS Ideas Engineering, you know that in 2018 we researched and developed multiple prototypes based on blockchain so that we could gain a better understanding of where it makes sense to invest in blockchain and where it is just a waste of resources. We built the “Blockchain Time Tracker”, the “Vacation Smart Contract Assistant”, the blockchain-based “Credibility Score”, and we conducted a study about utilizing browser-based crypto mining as an alternative way of monetizing content on the web. If you want to learn more about our innovation process which spawns these prototypes, read this article here. None of these prototypes found their way into production. Our conclusion was that, despite the vast potential of this technology, none of our businesses could directly benefit from it at this point in time. On the other hand, the knowledge we gained helped us and our units understand the advantages and disadvantages in depth, and all year long we educated our brands with what we learned. The most requested talk was “Blockchain — Or the Slowest Database on the Internet” by Sebastian ‘Waschi’ Waschnick and Tarek Madany Mamlouk. We toured from event to event and the rooms were always packed with people who wanted to understand what the hype was all about. Blockchain — Or the slowest database on the Internet (talk in German) One of the highlights of our tour was the Awin Travel Day 2018 in Berlin, where Waschi and I were invited to present a case study about blockchain in the travel industry. This was a very hot topic that year, with automatic door locks coupled to smart contracts in shared apartments and distributed instant-rewards programs for airlines.
Cryptocribs planned to take over the private vacation-rental market of Airbnb by cutting out the middleman. Their whitepaper “CryptoCribs: A Peer-to-Peer Electronic Rental System” explains in detail how a blockchain-based solution could automate the financial and reputational intermediation and therefore make this service cheaper and more efficient. Waschi and Tarek presenting Blockchain in the Travel Industry at the Awin Travel Day 2018 Back in 2018 our audience asked how the market would evolve in the following years, but I never dared to make a prediction. This year, in 2020, I looked at those businesses I evaluated for the case study to see if they actually took over the market or just disappeared. My findings were kind of disappointing: nothing really happened. None of the evaluated services disappeared, but none of them revolutionized the industry either. Does that mean the blockchain revolution failed? 2019 — Did Google just kill the blockchain? Blockchain architecture provides solutions where you need a distributed, forgery-proof data store. The premise for this to work is math. Blockchains use asymmetric (public-key) cryptography and hashing for authentication and verification. This works as long as our computers are unable to solve the underlying mathematical challenges, and so far this has been a safe bet. In October 2019, Google claimed that their quantum computer solved a complex computation in seconds that would be virtually impossible to solve even for the strongest conventional supercomputer. This development would change the premise of secure encryption and hashing and make blockchains practically worthless. This is of course only partially true, because Google’s scientific breakthrough in quantum mechanics is more a proof of concept and not the new standard for computation. Ahmed Banafa cites in an article that a quantum computer would require 1,500 qubits to actually break the challenges of current state-of-the-art encryption, while Google currently has 53. For now it seems that blockchains remain safe and Google’s breakthrough in quantum computation does not really threaten the foundation of encryption-based IT security. Blockchain in 2020 web-search “Blockchain” on https://trends.google.com/ over time The blockchain hype is over. While 2 years ago everything was blockchain, today blockchain is what it is supposed to be: a clever solution for specific problems. And now it starts to get interesting, because now we can focus on serious blockchain applications. Gartner agrees by naming “Practical Blockchain” one of the most important technology trends of 2020. One thing that did not change over time is the relevance of global cryptocurrency trade. Trading remains the most important and valuable application of blockchain technology. Global crypto exchange Coinbase is available in more than 100 countries and established itself as the standard exchange for the 25 most stable cryptocurrencies. In terms of market capitalization, the worldwide biggest player is crypto exchange Binance. Sadly, Binance gained massive public attention when hackers stole Bitcoin worth around 40 million USD in 2019. So, even if the value of blockchain for business applications might be questionable to critics, the relevance in the financial sector is still obvious. And profits are gained not only in trading but also in mining. Riot Blockchain built an impressive enterprise whose core business is mining Bitcoin.
They invest massively in mining-capable hardware and blockchain-related ventures and make millions (USD) in revenue. But let’s have a look at some successful implementations of blockchains in the real world by big players: Diamond producer De Beers uses blockchain for tracking individual diamonds from miner to retailer. Another high-profile name is General Electric, which implemented a blockchain for tracking construction and maintenance of engine parts at GE Aviation. There are also implementations for average consumer products, as shown by Walmart, which uses blockchain for tracing items in its food supply ecosystem. A case study shows that the time to trace an item decreased from 7 days to 2.2 seconds. China-based Ant Financial released its Ant Duo-Chain Blockchain Platform, which allows companies in supply-chain channels to receive instant payment for ordered goods. Figure Technologies from California uses its blockchain for handling home equity lines of credit, student loan refinancing, and mortgages. Hyperledger: Modular blockchains for business Some of today’s most popular blockchain applications arise from the Hyperledger project. The project started in 2015 and is hosted by the Linux Foundation. Supporters include influential companies like IBM, Intel, SAP, Oracle, and Accenture. Their approach differs from other blockchain implementations by not having an associated digital currency. This confused me when I first read about it, because originally a blockchain’s currency was the mandatory fuel needed to finance the blockchain’s nodes. Hyperledger approaches this challenge in a different way. Instead of providing a completely open, distributed ledger for pseudo-anonymous users, the Hyperledger project is an open source platform for business applications. You can use the modular architecture to create exactly the personalized blockchain infrastructure your business needs. The core of blockchain applications consists of so-called smart contracts. In the Hyperledger context, smart contracts are generally referred to as chaincodes. Currently chaincodes can be written in Go, Node.js, or Java. This is where the developer includes custom business logic in the blockchain. The Hyperledger ecosystem consists of multiple frameworks and tools which can be orchestrated according to the project’s specific needs. If you want to start writing chaincode, start by looking at the IBM-contributed Hyperledger Fabric. Are you building mobile applications? Check out Hyperledger Iroha. Will your application represent a distributed ledger for supply chain management? Hyperledger Grid might be the framework for you. And don’t forget to use the Hyperledger Explorer to browse through and manage your data. A great overview of the Hyperledger project can be found here. Blockchain in Star Wars? If you are a Star Wars nerd like I am, the term “chaincode” might sound familiar to you. When I watched “The Mandalorian”, Disney’s TV show about a Mandalorian bounty hunter in the Star Wars universe, I heard the characters talk about “chaincodes”. Even though they never explicitly mention blockchains, it absolutely makes sense to understand them in the context of blockchain-based smart contracts! As an intergalactic bounty hunter, the user receives access to a chaincode which defines the payment of a certain amount of currency once the delivery of a fugitive has been verified by the chaincode partner. So it is a reasonable assumption that interplanetary transactions in the Star Wars universe are managed via smart contracts on blockchains.
As you can see, Star Wars is totally realistic and scientifically accurate in every way. (At least regarding chaincodes.) Photo by Michael Marais on Unsplash Where do we go from here? The hype is over and serious businesses are implementing blockchain-based solutions. Nevertheless, Gartner still describes blockchain as immature for enterprise deployment but sees today’s developments as steps towards full implementation within the next 3 years: “Blockchain, which is already appearing in experimental and small-scope projects, will be fully scalable by 2023.” Tooling ecosystems like Hyperledger demonstrate how businesses can easily assemble their custom blockchain-based solutions today without the overhead of early blockchain architectures. The last years taught us how to build blockchains efficiently; the next years will bring maturity, scalability, and interoperability.
https://medium.com/axel-springer-tech/2020-is-blockchain-still-a-thing-2a0429430c3e
['Tarek Madany Mamlouk']
2020-09-24 20:20:14.146000+00:00
['Gartner', 'Blockchain', 'Technology', 'Bitcoin', 'Innovation']
528
Randstad introduces Greetly's contactless visitor management system
Save Your AdminUTES with Greetly! Greet visitors, accept packages, and get more done. Start your FREE trial, no cc required, at https://www.greetly.com.
https://medium.com/@greetly/randstad-inf%C3%B6r-greetlys-kontaktfria-bes%C3%B6kssystem-ce3b1665b23c
['Greetly', 'Digital Receptionist', 'Digital Mailroom']
2020-12-20 01:57:27.969000+00:00
['Technology', 'Sweden', 'Office', 'Covid 19', 'Safety']
529
When Will The Mining Of Bitcoin End?
Photo by André François McKenzie on Unsplash Recently, the cryptocurrency Bitcoin reached 90 percent of its maximum supply. A study by blockchain.com revealed that of the total supply of 21 million Bitcoins, 18.89 million have already been mined and distributed to the market. The milestone comes almost 12 years after the first block, which carried a reward of 50 Bitcoins, was mined on January 9, 2009. Bitcoin founder Satoshi Nakamoto capped the supply of Bitcoin at 21 million, making the cryptocurrency scarce in order to control the inflation that could result from an unlimited supply. Bitcoin is “mined” by miners who solve math problems to verify and validate the blocks of transactions that take place on its network. It is the process by which new Bitcoins are added into circulation. After successfully validating a set of transactions, the miner is rewarded with a block of Bitcoins. It should be noted that every four years the Bitcoin mining reward is halved. So, when Nakamoto created Bitcoin, the reward for securing a block of transactions was 50 Bitcoins. In 2012 it fell to 25 Bitcoins, dropped to 12.5 in 2016 and 6.25 in 2020, and after the 2024 halving miners will earn only 3.125 Bitcoins per block. This process is called halving and will continue until the last Bitcoin is mined. It may seem that the world’s most popular cryptocurrency is on the verge of running out, but based on the halving schedule it is predicted that mining of the remaining 10 percent of Bitcoins will continue until around February 2140, according to blockchain.com. After the supply reaches 21 million, Bitcoin will be very scarce and miners will rely on transaction fees instead of block rewards. Miners will start to earn more from the transactions that take place on the blockchain than from newly minted coins. It is noteworthy that Bitcoin is not just a cryptocurrency, but a blockchain network that processes transactions on a distributed platform. Therefore, it has many more uses than just being a crypto asset.
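To see why the supply tops out at roughly 21 million, here is a small back-of-the-envelope Python sketch of the halving schedule described above. It halves the subsidy every 210,000 blocks and ignores satoshi-level rounding, so the figures are approximate (the real chain rounds each subsidy down to whole satoshis, which is why the true cap is slightly below 21 million):

# Bitcoin's block subsidy halves every 210,000 blocks (roughly every four years).
HALVING_INTERVAL = 210_000
INITIAL_SUBSIDY = 50.0

def subsidy_at_block(height: int) -> float:
    """Block reward in BTC at a given block height, ignoring satoshi rounding."""
    return INITIAL_SUBSIDY / (2 ** (height // HALVING_INTERVAL))

# Summing the geometric series over 33 halving epochs approaches ~21 million BTC.
total_supply = HALVING_INTERVAL * sum(
    subsidy_at_block(epoch * HALVING_INTERVAL) for epoch in range(33)
)

print(subsidy_at_block(0))        # 50.0   -> the 2009 reward
print(subsidy_at_block(840_000))  # 3.125  -> after the 2024 halving
print(round(total_supply))        # ~21000000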
https://medium.com/@harsh8/when-will-the-mining-of-bitcoin-end-f13ba77805ad
['Harsh Sheth']
2021-12-30 07:13:58.415000+00:00
['Web3', 'Crytocurrency', 'Technology', 'Bi̇tcoi̇n', 'Bitcoin Mining']
530
20 Signs You Need to Log Off Social Media
Deactivation of your account might be the break you need. @prateekkatyal on unsplash Without delay, here are the 20 signs: 1. You feel anxious and stressed when comparing your life with other people online. Perhaps, you can’t help comparing your background and income with all these other people online. You feel like you lack innate value. It does not feel good when you put in several hours of work, and you find yourself with drastically different results from peers who may be younger, or inexperienced. You do not have the same financial, personal, and material growth seen in other peoples’ online pages. As a result, active comparison results in a low self-esteem. What can be done when you experience a low self-esteem? Consider two other components of self-esteem: (1) self-efficacy, and (2) self-respect. Your ability to efficiently resolve your problems and change your beliefs about your value can significantly improve a low self-esteem. 2. You have passion projects that get delayed because you are keeping up with content that is distracting you. Are you guilty of spending hours reading personal growth and financial mastery topics? Yet, you do not implement any of the tips given. You mainly browse and read. Guess what? It is time to apply the arsenal of tips you have amassed from hours of reading and contemplation. 3. You feel emotionally drained and triggered from reading all the posts/news shared with you. Do you follow many people who have drastically varying viewpoints? Do you allow your mouse and finger to click the ‘trending’ page on all your social media accounts? The noise is starting to get to you. How often do you find yourself spending more than one hour reading news articles that leave you exhausted and hopeless? Try this simple tip: before touching any app or logging into any site, set a timer for 5 minutes to read and browse. Once the alarm rings, you must set aside your phone or tablet. 4. You do not use social media to connect, rather to stalk and to belittle other people. In a time where there is no shortage of hate, division, and animosity, you decide to use your precious time to stalk and belittle strangers on the internet. Some people use social media to distract themselves from their own pain. This is a sign you may need to get counselling. Oddly, if you have not spoken to someone in years, you can tell other people intimate details about that person’s life. You know their job, their child’s name, and their recent whereabouts. Yet, you cannot recall details about people in your circle of friends/family. These are tell-tale signs you can either: (1) apply for a job that requires detective skills, or (2) you can log-off and start memorizing details that are useful to your life. 5. You always have your phone or tablet with you, even if you are in the washroom. Are you guilty of balancing your phone on the toilet paper holder? Maybe you keep your wireless headphones on all day to stay connected, even when you are doing number one or number two. It is no surprise that in this technology-age, we see some family members who walk around with a phone, as if it is an extension of their bodies. Here is a simple tip to prevent you from bringing your phone everywhere with you: leave it on one table and it must remain on that table for at least 24 hours before you can take it somewhere else. If you cannot complete this challenge, then you should consider the possibility of using airplane mode when it is possible! 6. 
It has become second nature to log in automatically, and scroll endlessly for hours. You have tried to disengage from your social media accounts in the past by logging out, but now you find yourself logging in without thinking about it much. Your fingers naturally type and scroll without you pausing and thinking: ‘is this kind, necessary, and true, before I post, like, dislike, and share?’ Social media is designed to be highly addictive, so be careful of experiencing withdrawal symptoms from trying to quit. Perhaps deactivating your account for a week might help you live a life that is engaged in other aspects outside your phone/tablet’s screen. @coolmilo on unsplash Another tip to consider: do not use face recognition to log in, and do not have your accounts set up to automatically log you into your social media accounts. Try clearing your browsing history every couple of weeks, and clearing any saved passwords to maximize your safety. 7. You are concerned about the posts you share, and it has become a focal point in your daily life. Before eating, drinking, buying, constructing, sharing, making, and creating, you immediately ask yourself: “I wonder how many likes I can get from this post?” Your life revolves around content creation. Not a single moment is taken to enjoy your meal. Not a single moment is taken to enjoy your life as it is, without concern of getting validation from another person. @igormiske on unsplash You are primarily concerned about how a post will get a specific reaction from your followers/friends. The simple joy of sharing a post has become your main source for a dopamine hit. 8. You cannot fall asleep without spending hours on your phone, and you use your phone as a sleep aid. Have you ever fallen asleep with your phone right next to you? Many people suffer from insomnia. There are studies that state the blue light emitted from your screen affects your body’s natural sleep cycle. When the body’s circadian rhythm is thrown out of its normal cycle, your body will negatively react. The body needs periods of complete darkness to heal. The eyes, in particular, will tell you immediately if you need rest. Instead of buying a lot of eye drops, maybe it is time to put your phone to the side at least two hours before bedtime. @dogukan on unsplash Consider the benefits of changing your settings to night-time mode (with an orange hue on your screen) or downloading f.lux. F.lux is a computer program that adjusts the screen color according to different times throughout the day, such that your eyes will feel less strained closer to bedtime. You can customize the settings so that your eyes feel less strained from looking at the computer screen. 9. Your body is sending you signals that it is in pain and/or sore from your poor posture and inconsideration. Have you ever lain in bed for so long using your phone/tablet that when you shift your body, you notice tingling in your legs/arms? Have you ever dropped your phone on your face because you can no longer prop up your phone? Has your gluteus maximus ever been sweaty and flat from sitting too long after extended internet browsing? How about your fingers cramping from too much instant messaging? Can you feel your eyes becoming incredibly strained and dry? If you answered ‘yes’ to any of these questions, it is most likely time to log off! If your body is starting to send signals that you need to stop, maybe it is time to start listening before it is time to make New Year’s resolutions.
Develop more body awareness such that your shoulders are not rounded and tensed. @brucemars on unsplash Here are some quick tips to improve your body posture: Drop your shoulders away from your ears. Imagine a string extending from the top of your head pulling you upwards, so that you lengthen your body, rather than slouch. Look up and down, and left and right, so that your neck isn’t stiff. Apply the 20–20–20 rule. Spend a maximum of 20 minutes looking at your screen, and take 20 seconds to look away at an object at least 20 feet away from you. Work in intervals of 20 minutes to 1 hour. Remember to take 5 to 10 minutes to stretch in between each work period. Your legs and back will thank you. 10. You spend too much money on random advertisements shown on your social media. You know you need to stop, but you feel like it is impossible. Your bank account is slowly depleting from your overspending. You see many tailored advertisements that fit your personal taste. You can’t help but get that extra item to complete your overflowing collection of random bits-n-bobs. Pro-tip: do not link your credit card accounts to your social media accounts. 11. You must read every comment and respond to every interaction before ensuring your own personal hygiene and meals are taken care of first. You have neglected your own daily care. You are no longer functional and clean. Your family has made comments about the smells coming from your body and your room. Here is a quick tip: if you are hungry and/or dirty, do not even bother to finish reading this article, just go take care of yourself. 12. You use social media to prove yourself and record every milestone, whether it be your own or of a family member’s/friend’s. If it didn’t get posted on social media, did it even happen? Was there a graduation, birth, birthday, wedding, party, or job that you did not share? Additionally, if you did not share such events, did you truly enjoy every minute of it without concern of seeing posts of it online afterwards? If your self-image depends on the respect you get from other people seeing your accomplishments, or the accomplishments of your family/friends, then maybe it is time to reassess what is so great about having a manufactured self-image? 13. You do not care about the privacy of minor(s) who did not give you their consent before you posted their personal life events online. There is no problem sharing your life with other people. If you decide to take away that right from a minor who does not understand the concept of ‘giving consent,’ what are your reasons for doing it? @leorivas on unsplash Some people may argue they are keeping their posts limited to a close group of friends/family they trust. However, are you certain that everyone on your list of friends won’t sell the minor’s images to random people online? Should there be a required course that all new parents take about the dangers involved in social media? Maybe it is time to consider deleting those posts before your minor is old enough to ask you: “why did you do that to me without considering how it would affect my future and my feelings?” 14. You get into debates that make you lose your equanimity. You hit all the maximum letter counts on your social media post to prove your point. You are ready to bring out facts and figures to bring random strangers to their knees because you must be right. In doing so, you do not realize the vein popping out on your hand and perhaps, on the side of your forehead. 
You have lost touch with the fact that you have become triggered by a collection of pixels. @umby on unsplash 15. You have nightmares from what you have read/seen on your social media accounts. Your subconscious mind has started collecting your impressions, and it is re-showing you images that are triggering you while you are asleep. 16. You are impressionable to the point that you cannot differentiate fake news from reliable resources. If you cannot determine what is fake and what is real, it is time to step away from your social media accounts. Fabricated content will try to profit off of you. If your main news source has advertisements galore and clickbait links everywhere, you are probably relying on sources that are biased and misinformed. Always question: Who wrote it? Do they have qualifications? What do they want from me (money, attention, views, promotion, traffic)? What are the sources they reference? Are the sources peer-reviewed or reliable? When was the article posted? Is it still relevant? Do experts have anything to say about the viewpoints expressed in the article? What are some conflicting beliefs/views/facts? Can I verify the information? Can I fact-check any scientific claims the article is making? Are there any recommendations or ideas made that are discriminatory, harmful, and hateful? 17. All your conversations are centered on what you read on social media. You are unable to hold a conversation about anything outside what you have learned on social media. It is time to become fascinated by other topics that may require more research and rigor. 18. You ask for personal advice through your social media accounts from people who are unverified, or self-proclaimed specialists who do not care about your well-being. There are plenty of elders and young folks who have been scammed. When hit with a life crisis or vulnerable moment, they immediately get online to share it and ask for advice. If you find yourself asking for guidance from strangers, make sure you use your discretion, and seek reliable counsel/mentorship, before applying any advice to your personal life. Consider asking yourself: Will this advice hurt me long-term (either financially, emotionally, and/or physically)? Am I acting from a place of anger, hatred, greed, insecurity, fear, or jealousy? Would I give this advice to someone I love? Can I trust this random person? Do they have my best interests at heart? Even if they do, who lives with the consequences of the decision if I end up taking the advice? Should I sleep on it or eat something before acting? Am I acting in haste? Am I using my logic? How will my family and friends be hurt if I take this advice? Have I consulted other reliable resources or a mentor before applying this advice to my personal life? 19. You notice a drastic change in your attention span. And it has affected your personal relationships. You can no longer focus for long periods without feeling incredibly agitated. You have trained yourself to shorten your attention span. If it does not emotionally trigger you and heighten your senses, then it is no longer worthy of your attention. People around you have commented on your inability to hear what they are truly saying. You are too focused on your phone to pay attention to people around you. @priscilladupreez on unsplash 20. When added up, you realize you are spending years of your life on social media that you cannot take back. Life is time. If we track our time, then we know how we are spending our life.
When you track your usage, you realize you are spending several hours a day on social media. Perhaps it is time to change your settings to allow only 10 minutes a day on social media, rather than several hours. Tip: you can check your screen time and change your app limits in your settings.
https://medium.com/@heka105/20-signs-you-need-to-log-off-social-media-acedd1552d64
['He Ka']
2020-12-21 12:03:15.058000+00:00
['Health', 'Technology', 'Mental Health', 'Time Management', 'Social Media']
531
The Art of Thinking in Other People’s Heads
The complaint that technology and media have distorted our culture, our politics, and our very understanding of reality is by now well-worn. A ruthless critic who regularly excoriated the press in his magazine The Torch, Karl Kraus blamed German newspapers for the outbreak of World War I. He reserved a special hatred for the feuilleton section of the paper, which included, along with art, literature, and reviews, short impressionistic pieces about city life and culture. Baudelaire's poems, forebears of an evasive genre scholar Andreas Huyssen has termed the "Modernist miniature," were somewhat out of place in the feuilleton section, like stumbling on an art film while channel surfing. Whereas Baudelaire's feuilletons tried to capture in language the experience of flânerie, the popular feuilleton transformed flânerie into a saleable commodity in its own right. In a note from the Arcades Project, which contains Walter Benjamin's unfinished notes on late nineteenth-century Paris, he writes of the three main ingredients of the French newspaper: "On information, advertising, and the feuilleton: the idler must be furnished with sensations, the merchant with customers, and the man in the street with a worldview." Thus was born a style of writing one might call, repurposing a phrase of Bertolt Brecht's, "The art of thinking in other people's heads."
https://medium.com/cogly/the-art-of-thinking-in-other-peoples-heads-858ecde02ffc
[]
2017-03-09 21:07:34.797000+00:00
['Culture', 'Thinking', 'Experience', 'Technology', 'Philosophy']
532
SHAME OF THE SCIENCE WORLD
SHAME OF THE SCIENCE WORLD Thalidomide is a drug whose story begins with medical research in the 1950s and 1960s, research that was carried out in deeply unethical ways. In Nazi Germany, doctors who wanted to find a vaccine against typhoid deliberately infected innocent people in concentration camps and then experimented on them; meanwhile, hundreds of people died in the camps. Some of these doctors later worked at the pharmaceutical company Chemie Grünenthal, where they discovered the thalidomide compound. A drug sold under the name Contergan was produced from this substance, whose pharmacological effects were not fully known due to insufficient research. Because the compound can activate or deactivate the immune system, it was thought to be usable for treatment; its calming and anti-vomiting effects were discovered later. A calming effect was exactly what wartime and post-war consumers wanted, so many people used the drug, and its antiemetic (anti-vomiting) effect also made it attractive to pregnant women. One in seven Americans reportedly used it regularly, and demand in European markets was even higher. It was promoted as the only non-barbiturate sedative available; barbiturates are sedative drugs that can cause serious side effects. For this reason thalidomide, which was first used in Germany, attracted great attention and by 1960 was marketed in 46 countries, sold as freely as aspirin. The Australian obstetrician William McBride described its antiemetic effect in 1960 and advised pregnant women to use the drug, and they began taking it over the counter. But then things changed. In 1961 the same obstetrician, William McBride, began associating thalidomide with birth defects, and a German newspaper reported that 161 babies had been badly affected. People who used it started to experience muscle aches, weakness, and peripheral nervous system diseases. In late 1962, as miscarriages and birth defects increased, the drug began to be banned in every country where it was sold, but by then it was too late: as noted earlier, thalidomide had been widely available without a prescription. Meanwhile, the company that manufactured the drug denied the reports that it was harmful. It lied to all the doctors who said that this thalidomide-containing medicine was dangerous; even as its negligence cost thousands of people their health and put the company's own existence at risk, it kept insisting, for the sake of money, that the medicine was safe. An experiment on chickens later revealed that thalidomide adversely affects bone development by preventing the developing embryo from forming blood vessels. The company resisted a little longer, but finally its production stopped completely. In the end, only two countries stand apart in the thalidomide disaster: the US and Turkey. Turkey is the only country where the disaster was not seen at all, because in America, although Dr. Frances Kelsey refused to grant the drug legal approval, around 2,000 doses circulated as "introduction samples," and many cases were therefore observed there. In Turkey, the veterinarian Prof. Dr. Sureyya Tahsin Aygun, acting in accordance with Turkey's laws, warned the Turkish Ministry of Health, and the drug was never used in any way. As a result, while thalidomide is blamed for some 90,000 deaths and more than 100,000 birth defects around the world, no such cases were demonstrated in Turkey.
https://medium.com/predict/shame-of-the-science-world-322d696787d1
['Recep Suluker']
2020-12-27 00:54:46.743000+00:00
['Science', 'Future', 'Technology', 'Disaster', 'Disability']
533
Solid Copper Raspberry Pi 4 Case Costs $250
The 785 gram case looks fantastic and keeps the Pi cool even when overclocked to 2.1GHz. By Matthew Humphries The Raspberry Pi offers a very cheap way to get a desktop computer—unless, of course, you decide to place it inside this gorgeous solid-copper case. As Tom’s Hardware reports, the Solid Copper Maker Block Case for Raspberry Pi 4 from Desalvo Systems costs an eye-watering $249.95. In return, your case can not only look amazing, but it can also keep your Pi running cool even at the highest of overclocks.
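The article itself has no code, but if you run a Pi overclocked inside a sealed metal case like this, it is worth verifying the thermals yourself. A minimal sketch, assuming stock Raspberry Pi OS (the sysfs path below is the standard SoC temperature sensor; the overclock itself would live separately in /boot/config.txt, e.g. arm_freq and over_voltage):

#!/usr/bin/env python3
# Poll the Raspberry Pi SoC temperature once a second for a minute.
# Assumes Raspberry Pi OS, which exposes the reading in millidegrees Celsius.
import time

SENSOR = "/sys/class/thermal/thermal_zone0/temp"

def soc_temp_c() -> float:
    with open(SENSOR) as f:
        return int(f.read().strip()) / 1000.0

for _ in range(60):
    print(f"SoC temperature: {soc_temp_c():.1f} C")
    time.sleep(1)

Watching this output while running a CPU stress test is a quick way to confirm that a passive case really is keeping an overclocked Pi away from its thermal throttling point.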
https://medium.com/pcmag-access/solid-copper-raspberry-pi-4-case-costs-250-5086ca4ccf7f
[]
2020-11-27 21:20:35.453000+00:00
['Raspberry Pi', 'Technology', 'Computing', 'DIY', 'Gadgets']
534
OneRagtime at VivaTech 2021
On June 14th, our team OneRagtime — Stéphanie Hospital, CEO and Founder and César Chanut, Oscar Péribère and Pauline d’Arthuys — attended the 5th edition of Vivatech’s annual events dedicated to startup growth and business change. What a blast to be all together again! VivaTech was created only in 2016 yet it has become one of Europe’s leading tech gatherings in France as the French Tech and European Tech are just booming. VivaTech adapted this year to a hybrid model allowing 26,000 participants to join in-person and 114,000 online. The ability to be in-person again was a truly appreciated experience that granted engaging and convivial interactions with the people there. It was super enriching meeting investors, speakers, entrepreneurs and startups. Stéphanie Hospital (OneRagtime) and Michel Combes (Softbank) During the three days, VivaTech hosted many incredible speakers, including Michel Combes and Marcelo Claure from Softbank. They were invited to exchange about Softbank and its amazing journey in Venture Capital. During the interview, Michel talked about the recent Softbank’s fundraise in Jellysmack, now a unicorn and the first investment ever of the OneRagtime portfolio! Jellysmack’s focus is on the creator’s economy. We are delighted that Softbank believes in Jellysmack’s vision as we did when we invested at the inception of the company. We share Softbank investment thesis: “Great entrepreneurs disrupting traditional sectors with Artificial Intelligence.” OneRagtime’s Founder and CEO, Stéphanie Hospital, engaged as a speaker in a discussion alongside Michael Rickwood, Chief Coaching Officer at Ideas on Stage, about her story in the tech sector. “All the projects and technology that we work on today really have the potential to change people’s lives, and that’s why we do it”, said Stéphanie while explaining what the venture capital fund she founded, OneRagtime, does. Many of OneRagtime’s portfolio companies were at Vivatech : From left to right: Raphaël Jabol (Avostart), Stéphanie Hospital (OneRagtime) and Jean-Denis Garo (Golem.ai) Golem.ai with Jean-Denis Garo : a B2B deep tech startup that develops and distributes solutions for automation & business support through explainable & frugal AI language analysis. Part of La Poste booth (one of the biggest exhibitors at the event and a platinum partner of Viva Technology). Avostart with Raphaël Jabol : another B2B startup offering a platform for all types of legal assistance. Also, part of La Poste Booth ExactCure with Frédéric Dayan, Sylvain Benito and Fabien Astic : a B2B and B2C startup, an app and online platform helping patients with the management of medicine intake. Part of the Huawei booth Also present this year: Hoomano with Xavier Basset and Cyril Maitrejean : it is an Artificial Intelligence software enabling social interaction with machines. Make.Org with Axel Dauchez : it is an independent and European civic tech aimed at reactivating democracies by engaging citizens in collaborative transformative actions. They participated in the conference “Education: can tech make learning better? Fun fact, Axel launched Vivatech when he was at Publicis (thanks Axel)! And last but not least, our own Taig Khris announced this game-changing partnership with Microsoft Teams. Taig Khris is the CEO of Onoff, the first telecom operator in the cloud, allowing users to manage multiple numbers and to switch them “on” or “off” with one click and Onoff has recently partnered with Microsoft Teams to have their numbers integrated into the platform. 
Exciting news! Viva Technology was a fantastic opportunity for startups to connect with their customers, large corporations, users and audiences. Thank you to Julie Ranty, managing director at VivaTech, who gave us the opportunity to be a part of the event; to July Avedissian, who guided us throughout the experience; to Marin Soullier, in charge of investor relations, for organising and making this event memorable; and of course to Pierre Louette, CEO of Les Echos, and Maurice Levy for hosting us and making this event so special and unique. by Capucine Verbrugge
https://medium.com/@oneragtime/oneragtime-at-vivatech-2021-90726cf119
[]
2021-07-06 12:00:28.672000+00:00
['Vivatech', 'Investors', 'Entrepreneurship', 'Technology', 'Venture Capital']
535
Is Your Brand Ready for the Talent-Stack Wars?
Is every company really just a technology company? Most companies require proficiency in technology in order to reach customers where they are, fulfill orders, and service the relationship during and after the sale. Is every company a financial services firm? Are Ford or GM auto manufacturers or finance companies? Based on the new car ads and 7-year financing, I’d say they’re selling cars and debt. They probably earn more money on debt, but I could be wrong. Is every brand a content company? Ask Amazon, Apple, and AT&T, and we haven’t even moved to “B” companies. Is every professional a brand? It’s always been the case, but it used to be all about the resume. Not anymore. Three Reasons to Nurture a Personal Brand Everything is Project-Based In recent years I met professionals who have been at a single company for more than 25 years. People talk about unicorns in the world of technology investments. I thought the days of pensions and gold watches at retirement were long gone, so these “lifers” are the real unicorns in our current employment environment. What I have learned, and this may be a Captain Obvious statement, is that we’re all working on a project whether it’s as a full-time employee, a vendor, contractor, intern, or volunteer. No matter your current situation, you’re working on yourself, your talent stack, and networking for the next opportunity 2. People will Compete Based on Talent Stacks Recently I have paid more attention to the non-Dilbert work from Scott Adams, and he talks a lot about developing a talent stack. I’ve worked almost exclusively in technology, so the concept of a software stack is a useful frame for how people acquire skills, build their value in the market, and how they can sell themselves. Google views technical certification as equal or better than a four-year, college degree. The company offers six-month, online training courses that arm people to compete for lucrative, in-demand work opportunities. Software developers and other technology savants have known that world-class skills beat a degree any day, but the rest of the world is catching up. If the University of “X” on your resume doesn’t have the same cache as it once did, do you stand out because of the skills you demonstrate (and publicize) in the world or your educational affiliation? It’s the skills, and then it’s about your brand. One of the lessons from the talent or skill stack concept is that you can acquire almost any skill through the intentional investment of time, effort, and sometimes money. Also important to note that those skills can be completely unrelated to your current work. 3. Your Job & Network will be Distributed and Remote Forever Some companies will support a hybrid work model that offers remote and in-person work options, and some companies will go completely remote. Your current work environment will remain remote, and the watercooler, lunchroom, and hallway conversations are going away too. Companies will hire the best person globally for every role Communication and social media skills will become even more critical to your success. Since remote work is here to stay in some form or fashion, it’s advisable to build your brand. Use platforms like LinkedIn to connect and engage if only through regular posts that showcase the full range of your talent stack (including soft skills). If TikTok or Twitch are the best platforms for your talent stack and target market, let your freak flag fly. 
Networks and platforms will evolve as their popularity ebbs and flows along with behavior patterns. The core of your work is your brand, message, and value. Hustle & Brand like a Star Here's the thing. If you're uncomfortable with the idea of building a brand, think of it as a skill you can develop as part of your future talent stack. If it's not something you've done in the past, why not learn lessons from people outside of your industry? You know, like celebrities. Yes, I know they talk too much about politics, but they are independent contractors and personal brands. They generate ideas, content, and controversy as a way to stay in the conversation. After all, who doesn't want to be The Rock? Follow me on LinkedIn or Twitter.
https://medium.com/@david-yates-fontaine/is-your-brand-ready-for-the-talent-stack-wars-4ba0452c8bfc
['David Fontaine']
2020-12-15 14:55:19.131000+00:00
['Personal Branding', 'Personal Development', 'Future Of Work', 'Technology Trends', 'Content Strategy']
536
Death by a thousand spreadsheets
Death by a thousand spreadsheets How product management has fallen behind in the new era of product excellence In 1878, the world was introduced to something that would forever change how people would communicate. That product was the telephone. Despite all its promise, reaching the masses took a lot of time. In fact, it took almost 80 years for the telephone to reach 100 million users. But the world has changed a lot since 1878. While it took the telephone nearly a century to reach 100 million users, it took the mobile phone less than 20 years to hit the same mark. After its launch, it took Facebook only five years to do the same. And when Candy Crush launched in 2012, it took just 15 months to reach that benchmark of 100 million users. What's my point? Today, the pace of the market has accelerated dramatically. While telephone companies had the better part of a century to secure their position as the leading telecom provider, today's competitive climate means companies often have just a few years to dominate their industry categories. Nowadays, new rivals can come out of nowhere, and if they offer superior features or UX, or succeed in building stronger customer relationships, your customers can (and will) defect to them. (HipChat vs. Slack, anyone?) Undoubtedly, the speed at which we innovate is crucial and customers want nothing but the best from your products. These days they accept nothing short of excellence, and why shouldn't they? In this climate of rapid change, heightened competition and customer expectations, the role of the product manager — your role — has to keep up with the times. The issue here is that it hasn't. Product Management Is Stuck In The Past Technology has changed a lot in the past few decades, and with it people have changed as well. Take your colleagues in sales, marketing, and support. They've enjoyed a host of new tools that help them respond to this new fast-paced, customer-centric world. But product managers? Well, we haven't been so lucky. Take a look at what I mean: Marketing teams have Marketo. Sales teams have Salesforce. Support teams have Zendesk. Engineering teams have JIRA. And product managers have… spreadsheets. That's right — essentially the same spreadsheets we had decades ago. The truth is this: Our outdated tools and processes are keeping us from excelling in this new fast-moving, customer-centric world. But while you and I are stuck with our same ol' spreadsheets, our competitors have found a way to systematically capture user feedback from support, sales, and marketing, and use it to make better prioritization decisions. While the loudest voices in the company and customer base are pressuring us to build their favorite feature ideas, other teams are deciding what to build based on criteria that support a cohesive strategy. And while we're struggling to win buy-in for our roadmaps, other teams are rallying their entire organizations around a common vision for where their products are headed. We've been left in the virtual dust by the outdated, old-school methods we're stuck with. Perhaps, like us, you've said "there has to be a better way." After all, there are plenty of companies like Zendesk and Invision that seem to release updates and products that continue to knock it out of the park. So what are they doing differently? How The Best Teams Do Product Management In The Modern World At productboard, we make it our job to know what the best teams in the product management space are doing. 
What we've discovered is they all share these three core things: Deep customer understanding Clear product strategy Buy-in for the roadmap When companies and product teams have this framework at top of mind in everything they do, it paves the way to building truly excellent products. Let's take a closer look at each in turn. Deep Customer Understanding The entire product team has a deep understanding of what users really need. We've all fallen into the trap of assuming we knew exactly what users needed before an earth-shattering revelation showed us just how little we really understood. As organizations evolve, they develop better systems for collecting quality user feedback and validating their understanding of user needs. They develop pipelines to route insights from sales calls and customer support tickets to the product team for review. In a well-meaning attempt to understand customers, product managers, designers, or dedicated researchers interview users about their needs before feature prioritization even takes place. And early prototypes of a feature are shared with users to collect feedback sooner. But, all too often, critical user insights collected from these activities remain siloed in individual teammates' inboxes, Evernotes, Google docs, CRMs, or support systems. The problem many teams have discovered is that unless this information is centralized, made available for all teams to collaborate around, and formatted in a way that makes it actionable, we miss the opportunity to put these insights to use. The top product teams understand this, so they have relevant user research and feedback on hand for every prioritization decision. Product managers, designers, and developers all know what users really need (rather than just building whatever they happen to request). That deep empathy for users lends a real sense of purpose to their work. It also gives them an edge in delivering features that solve user problems in a particularly delightful way. Clear Product Strategy The whole product team is aligned around key strategic objectives. There will always be new things your product could do, or things your product could do better. And there will always be some colleagues and customers who are especially passionate about what those things should be. (Sometimes that colleague is even your CEO…) But as product managers, it's our responsibility to gain the perspective necessary to see which of those fit into a coherent strategy for how to sustain the success of our product (and business) over the long-term. Product strategy can seem abstract and intimidating. Really it's all about deciding on areas of focus that will guide the feature prioritization process. Before moving forward building various features, take a step back to consider your overall strategy. What's most important for your business right now? Growth: How can you ensure growth that will sustain your business? How will you compel new users to adopt your product? How will you help them gain ongoing value from it? Competitive differentiation: Are you in an especially competitive industry? How can you differentiate yourself, or adapt your solution to create a new market category altogether? 
Regulatory compliance: Do you have to contend with a strict regulatory environment? How will you keep your product in compliance? Will you work to support the more stringent requirements of enterprise customers or target SMBs instead? Security/Reliability: What investments will you make to keep your solution secure and reliable? Top-performing product teams have moved beyond one-off feature prioritization to focusing on clusters of complementary features, or initiatives, that all support some common business objective within your product strategy. Meanwhile, the best product managers ensure everyone on the team understands these objectives and why they'll drive the business forward. Whether the goal is to drive user adoption (as evidenced by higher trial conversion rates) or improve platform reliability (as evidenced by a 5% decrease in bug reports), everyone working on the product should know why their work is important.
https://medium.com/productboard/still-prioritizing-features-in-a-spreadsheet-heres-why-that-should-scare-you-1611d61b95a1
[]
2018-06-06 21:16:28.463000+00:00
['Startup', 'Ideas', 'Product Management', 'Technology', 'UX']
537
This Transparent Edge-Lit Seven-Segment Display Elicits Nostalgia
Small OLED and LCD screens are cheap enough now that you can use them in virtually any project, but for decades seven-segment displays were the go-to solution when you needed to display numerical digits without spending a lot of money. As such, many makers have a nostalgic fondness for their appearance. You can, of course, still purchase them and use them in your projects. Or, you can follow Debra’s lead and pay homage to those classic displays with an edge-lit seven-segment display. At first glance, and without a sense of scale, Debra’s design looks like a traditional seven-segment display. Each digit is broken up into the requisite seven line segments, and there are six digits to make it usable as a clock. But in actuality, this display is much larger. It’s also completely transparent, and the digits — or even individual segments — can be shown in any color. That’s because those segments are actually edge-lit using a strip of Adafruit NeoPixel-style individually-addressable RGB LEDs. Those shine through clear acrylic to illuminate the necessary segments. If you want to build your own edge-lit seven-segment display, Debra has provided very detailed instructions on how to do so. You will need access to a precise laser cutter, because the acrylic parts all fit with very tight tolerances. After cutting all of your parts, you can insert the LED strips into their mounting locations. The opposite side of each segment is covered in non-conductive foil tape in order to eliminate bleed over between segments. Debra used an Adafruit ItsyBitsy M4 Express to control the LEDs, but you can use just about any microcontroller development board. She has also provided the code to make it all work.
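Debra's own code is linked from her write-up; purely as an illustrative sketch (the data pin, LEDs per segment, and segment ordering here are assumptions, not her actual wiring), driving a single edge-lit digit from CircuitPython might look something like this:

import board
import neopixel

LEDS_PER_SEGMENT = 3          # assumption: three LEDs light each acrylic segment
SEGMENTS = "abcdefg"          # conventional seven-segment naming: a = top, g = middle
# Which segments are lit for each digit, using the standard seven-segment patterns.
DIGIT_SEGMENTS = {
    0: "abcdef", 1: "bc", 2: "abdeg", 3: "abcdg", 4: "bcfg",
    5: "acdfg", 6: "acdefg", 7: "abc", 8: "abcdefg", 9: "abcdfg",
}

pixels = neopixel.NeoPixel(board.D5, LEDS_PER_SEGMENT * len(SEGMENTS),
                           brightness=0.3, auto_write=False)

def show_digit(value, color=(0, 255, 180)):
    # Light only the LEDs behind the segments that make up this digit.
    pixels.fill((0, 0, 0))
    for segment in DIGIT_SEGMENTS[value]:
        start = SEGMENTS.index(segment) * LEDS_PER_SEGMENT
        for i in range(start, start + LEDS_PER_SEGMENT):
            pixels[i] = color
    pixels.show()

show_digit(7)

Extending this to a six-digit clock is mostly a matter of offsetting each digit's starting index into one long strip and updating all six digits before calling show().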
https://medium.com/@cameroncoward/this-transparent-edge-lit-seven-segment-display-elicits-nostalgia-7845c167cb73
['Cameron Coward']
2019-06-10 17:50:44.268000+00:00
['Technology', 'DIY', 'Makers', 'Clock', 'Arduino']
538
Smart Contracts: Threat or opportunity?
A "smart contract" is simply a program that runs on the Ethereum blockchain. It's a collection of code (its functions) and data (its state) that resides at a specific address on the Ethereum blockchain. Photo by vjkombajn from pixabay A. Terminology Smart contract: a computer program or transaction protocol intended to automatically execute, control or document legally relevant events and actions according to the terms of a contract or an agreement. The objectives of smart contracts are the reduction of the need for trusted intermediators, arbitration and enforcement costs, and fraud losses, as well as the reduction of malicious and accidental exceptions. White paper: a report or guide that informs readers concisely about a complex issue and presents the issuing body's philosophy on the matter. It is meant to help readers understand an issue, solve a problem, or make a decision. Blockchain: a growing list of records, called blocks, that are linked together using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data. The timestamp proves that the transaction data existed when the block was published in order to get into its hash. As blocks each contain information about the block previous to it, they form a chain, with each additional block reinforcing the ones before it. Therefore, blockchains are resistant to modification of their data because once recorded, the data in any given block cannot be altered retroactively without altering all subsequent blocks. Ethereum: a decentralized, open-source blockchain with smart contract functionality. Ether is the native cryptocurrency of the platform; among cryptocurrencies, it is second only to Bitcoin in market capitalization. B. What Are Smart Contracts? The phrase and concept of "smart contracts" was developed by Nick Szabo (a computer scientist, lawyer and cryptographer) with the goal of bringing what he calls the "highly evolved" practices of contract law and practice to the design of electronic commerce protocols between strangers on the Internet. Smart contracts are a type of Ethereum account (if you want to know more, you can read the Ethereum white paper here). This means they have a balance and they can send transactions over the network. However, they're not controlled by a user; instead, they are deployed to the network and run as programmed. User accounts can then interact with a smart contract by submitting transactions that execute a function defined on the smart contract. Smart contracts can define rules, like a regular contract, and automatically enforce them via the code. Smart contracts cannot be deleted by default, and interactions with them are irreversible. C. Smart Contract Architecture Within a smart contract, there can be as many stipulations as needed to satisfy the participants that the task will be completed satisfactorily. To establish the terms, participants must determine how transactions and their data are represented on the blockchain, agree on the "if/when…then…" rules that govern those transactions, explore all possible exceptions, and define a framework for resolving disputes. Smart Contract Schematic Then the smart contract can be programmed by a developer — although increasingly, organizations that use blockchain for business provide templates, web interfaces, and other online tools to simplify structuring smart contracts. 
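Before getting into the EVM mechanics below, it may help to see what interacting with a deployed contract looks like from ordinary code. This is only a hedged sketch using the web3.py library, not anything from the article: the node URL, contract address, account and two-function ABI are placeholders.

from web3 import Web3

# Placeholders: point these at a real node, a deployed contract and its ABI.
w3 = Web3(Web3.HTTPProvider("https://example-node.invalid"))
ADDRESS = "0x0000000000000000000000000000000000000000"
ABI = [
    {"name": "get", "type": "function", "stateMutability": "view",
     "inputs": [], "outputs": [{"name": "", "type": "uint256"}]},
    {"name": "set", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "value", "type": "uint256"}], "outputs": []},
]

contract = w3.eth.contract(address=ADDRESS, abi=ABI)

# A read-only call runs locally on the node and costs no ether.
current = contract.functions.get().call()

# A state-changing call is sent as a transaction from an account the node manages;
# the method name and parameters are ABI-encoded into the transaction's input data.
tx_hash = contract.functions.set(current + 1).transact({"from": w3.eth.accounts[0]})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)

The split between call() and transact() mirrors the architecture described next: reads are free local executions, while anything that changes contract state has to travel as a transaction paid for by the calling account.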
The basic architecture of the EVM (Ethereum Virtual Machine) that runs smart contracts is that all calls to the contract are executed as transactions, where the ether required to execute a contract method is transferred from the calling account address to the contract account address. The contract code resides at the contract address on the blockchain and expects calls to come in as transactions carrying the method parameter data as the transaction's "input". To enable a standard format for all clients, the method name and parameters need to be marshaled in a recommended format. D. Pros and Cons Why should we use smart contracts? 1. Speed, efficiency and accuracy Once a condition is met, the contract is executed immediately. Because smart contracts are digital and automated, there's no paperwork to process and no time spent reconciling errors that often result from manually filling in documents. 2. Trust and transparency Because there's no third party involved, and because encrypted records of transactions are shared across participants, there's no need to question whether information has been altered for personal benefit. 3. Security Blockchain transaction records are encrypted, which makes them very hard to hack. Moreover, because each record is connected to the previous and subsequent records on a distributed ledger, hackers would have to alter the entire chain to change a single record. 4. Savings Smart contracts remove the need for intermediaries to handle transactions and, by extension, their associated time delays and fees. Why should we AVOID using smart contracts? Smart contracts introduce an additional risk that does not exist in most text-based contractual relationships — the possibility that the contract will be hacked or that the code or protocol simply contains an unintended programming error. Given the relative security of blockchains, these concepts are closely aligned; namely, most "hacks" associated with blockchain technology are really exploitation of an unintended coding error. As with many bugs in computer code, these errors are not glaring, but rather become obvious only once they have been exploited. For example, in 2017 an attacker was able to drain several multi-signature wallets offered by Parity of $31 million in ether. Multi-signature wallets add a layer of security because they require more than one private key to access the wallet. However, in the Parity attack, the attacker was able to exploit a flaw in the Parity code by re-initializing the smart contract and making himself or herself the sole owner of the multi-signature wallets. Parties to a smart contract will need to consider how risk and liability for unintended coding errors and resulting exploitation are allocated between the parties, and possibly with any third-party developers or insurers of the smart contract. A blockchain-based smart contract is visible to all users of said blockchain. However, this leads to a situation where bugs, including security holes, are visible to all yet may not be quickly fixed. Such an attack, difficult to fix quickly, was successfully executed on The DAO in June 2016, draining approximately US$50 million worth of Ether at the time, while developers attempted to come to a solution that would gain consensus. The DAO program had a time delay in place before the hacker could remove the funds; a hard fork of the Ethereum software was done to claw back the funds from the attacker before the time limit expired. 
Other high-profile attacks include the Parity multi-signature wallet attacks, and an integer underflow/overflow attack (2018), totaling over US$184 million. Issues in Ethereum smart contracts, in particular, include ambiguities and easy-but-insecure constructs in its contract language Solidity, compiler bugs, Ethereum Virtual Machine bugs, attacks on the blockchain network, the immutability of bugs and that there is no central source documenting known vulnerabilities, attacks and problematic constructs. References: https://www.ibm.com/topics/smart-contracts https://corpgov.law.harvard.edu/2018/05/26/an-introduction-to-smart-contracts-and-their-potential-and-inherent-limitations/ https://ethereum.org/en/developers/docs/smart-contracts/ https://en.wikipedia.org/wiki/Smart_contract https://blockgeeks.com/ethereum-smart-contract-clients/ https://web.archive.org/web/20170730133911/http://iqdupont.com/assets/documents/DUPONT%2D2017%2DPreprint%2DAlgorithmic%2DGovernance.pdf https://www.freecodecamp.org/news/a-hacker-stole-31m-of-ether-how-it-happened-and-what-it-means-for-ethereum-9e5dc29e33ce/
https://medium.com/@omid-haghighatgoo/smart-contracts-threat-or-opportunity-ffa6396a981e
['Omid Haghighatgoo']
2021-09-08 08:44:16.543000+00:00
['Blockchain Technology', 'Blockchain', 'Smart Contracts', 'Bitcoin', 'Ethereum']
539
Data Science’s Most Misunderstood Hero
Data Science’s Most Misunderstood Hero Why treating analytics like a second-class citizen will hurt you This article is an extended 2-in-1 remix of my HBR article and TDS article about analysts. Be careful which skills you put on a pedestal, since the effects of unwise choices can be devastating. In addition to mismanaged teams and unnecessary hires, you’ll see the real heroes quitting or re-educating themselves to fit your incentives du jour. A prime example of this phenomenon is in analytics. Shopping for the trophy hire The top trophy hire in data science is elusive, and it’s no surprise: “full-stack” data scientist means mastery of machine learning, statistics, and analytics. When teams can’t get their hands on a three-in-one polymath, they set their sights on luring the most impressive prize among the single-origin specialists. Who gets the pedestal? Today’s fashion in data science favors flashy sophistication with a dash of sci-fi, making AI and machine learning darlings of the hiring circuit. Alternative challengers for the alpha spot come from statistics, thanks to a century-long reputation for rigor and mathematical superiority. What about analysts? Analytics as a second-class citizen If your primary skill is analytics (or data-mining or business intelligence), chances are that your self-confidence takes a beating when your aforementioned compatriots strut past you and the job market drops not-so-subtle hints about leveling up your skills to join them. Good analysts are a prerequisite for effectiveness in your data endeavors. It’s dangerous to have them quit on you, but that’s exactly what they’ll do if you under-appreciate them. What the uninitiated rarely grasp is that the three professions under the data science umbrella are completely different from one another. They may use the same equations, but that’s where the similarity ends. Far from being a sloppy version of other data science breeds, good analysts are a prerequisite for effectiveness in your data endeavors. It’s dangerous to have them quit on you, but that’s exactly what they’ll do if you under-appreciate them. Alike in dignity Instead of asking an analyst to develop their statistics or machine learning skills, consider encouraging them to seek the heights of their own discipline first. Data science is the kind of beast where excellence in one area beats mediocrity in two. Each of the three data science disciplines has its own excellence. Statisticians bring rigor, ML engineers bring performance, and analysts bring speed. At peak expertise, all three are equally pedestal-worthy but they provide very different services. To understand the subtleties, let’s examine what it means to be truly excellent in each of the data science disciplines, what value they bring, and which personality traits are required to survive each job. Excellence in statistics: rigor As specialists in coming to conclusions beyond your data safely, statisticians are your best protection against fooling yourself in an uncertain world. To them, inferring something sloppily is a greater sin than leaving your mind a blank slate, so expect a good statistician to put the brakes on your exuberance. Constantly on tiptoe, they care deeply about whether the methods applied are right for the problem and they agonize over which inferences are valid from the information at hand. What most people don’t realize is that statisticians are essentially epistemologists. 
Since there’s no magic that makes certainty out of uncertainty, their role is not to produce Truth but rather a sensible integration of palatable assumptions with available information. The result? A perspective that helps leaders make important decisions in a risk-controlled manner. Unsurprisingly, many statisticians react with vitriol toward “upstarts” who learn the equations without absorbing any of the philosophy. If dealing with statisticians seems exhausting, here’s a quick fix: don’t come to any conclusions beyond your data and you won’t need their services. (Easier said than done, right? Especially if you want to make an important launch decision.) Excellence in machine learning: performance You might be an applied machine learning / AI engineer if your response to “I bet you couldn’t build a model that passes testing at 99.99999% accuracy” is “Watch me.” With the coding chops build prototypes and production systems that work and the stubborn resilience to fail every hour for several years if that’s what it takes, machine learning specialists know that they won’t find the perfect solution in a textbook. Instead, they’ll be engaged in a marathon of trial-and-error. Having great intuition for how long it’ll take them to try each new option is a huge plus and is more valuable than an intimate knowledge of how the algorithms work (though it’s nice to have both). “I’ll make it work.” -Engineer The result? A system that automates a tricky task well enough to pass your statistician’s strict testing bar and deliver the audacious performance a business leader demanded. Performance means more than clearing a metric — it also means reliable, scalable, and easy-to-maintain models that perform well in production. Engineering excellence is a must. Wide versus deep What the previous two roles have in common is that they both provide high-effort solutions to specific problems. If the problems they tackle aren’t worth solving, you end up wasting their time and your money. A frequent lament among business leaders is, “Our data science group is useless.” and the problem usually lies in an absence of analytics expertise. Statisticians and machine learning engineers are narrow-and-deep (the shape of a rabbit hole, incidentally) workers, so it’s really important to point them at problems that deserve the effort. If your experts are carefully solving the wrong problems, of course your investment in data science suffers low returns. To ensure that you can make good use of narrow-and-deep experts, you either need to be sure you already have the right problem or you need a wide-and-shallow approach to finding one. Excellence in analytics: speed The best analysts are lightning-fast coders who can surf vast datasets quickly, encountering and surfacing potential insights faster than the those other specialists can say “whiteboard.” Their semi-sloppy coding style baffles traditional software engineers… until it leaves them in the dust. Speed is the highest virtue, closely followed by the trait of not snoozing past potentially useful gems. A mastery of visual presentation of information helps with speed bottlenecks on the brain side: beautiful and effective plots allow the mind to extract information faster, which pays off in time-to-potential-insights. Where statisticians and ML folk are slow, analysts are a whirlwind of inspiration for decision-makers and other data science colleagues. The result: the business gets a finger on its pulse and eyes on previously-unknown unknowns. 
This generates the inspiration that helps decision-makers select valuable quests to send statisticians and ML engineers on, saving them from mathematically-impressive excavations of useless rabbit holes. Sloppy nonsense or stellar storytelling? “But,” object the statisticians, “most of their so-called insights are nonsense.” By that they mean the results of their exploration may reflect only noise. Perhaps, but there’s more to the story. Analysts are data storytellers. Their mandate is to summarize interesting facts and be careful to point out that any poetic inspiration that comes along for the ride is not to be taken seriously without a statistical follow-up. Buyer beware: there are many data charlatans out there posing as data scientists. There’s no magic that makes certainty out of uncertainty. Good analysts have unwavering respect for the one golden rule of their profession: do not come to conclusions beyond the data (and prevent your audience from doing it too). Unfortunately, relatively few analysts are the real deal — buyer beware: there are many data charlatans out there posing as data scientists. These peddle nonsense, leaping beyond the data in undisciplined ways to “support” decisions based on wishful thinking. If your ethical standards are lax, perhaps you’d keep these snake oil salesmen around and house them in the marketing dark arts part of your business. Personally, I’d prefer not to. Good analysts have unwavering respect for the one golden rule of their profession: do not come to conclusions beyond the data. As long as analysts stick to the facts (“This is what is here.” But what does it mean? “Only: This is what is here.”) and don’t take themselves too seriously, the worst crime they could commit is wasting someone’s time when they run it by them. Out of respect for their golden rule, good analysts use softened, hedging language (for example, not “we conclude” but “we are inspired to wonder”) and discourage leader overconfidence by emphasizing a multitude of possible interpretations for every insight. While statistical skills are required to test hypotheses, analysts are your best bet for coming up with those hypotheses in the first place. For instance, they might say something like “It’s only a correlation, but I suspect it could be driven by …” and then explain why they think that. This takes strong intuition about what might be going on beyond the data, and the communication skills to convey the options to the decision-maker, who typically calls the shots on which hypotheses (of many) are important enough to warrant a statistician’s effort. As analysts mature, they’ll begin to get the hang of judging what’s important in addition to what’s interesting, allowing decision-makers to step away from the middleman role. Of the three breeds, analysts are the most likely heirs to the decision throne. Because subject matter expertise goes a long way towards helping you spot interesting patterns in your data faster, the best analysts are serious about familiarizing themselves with the domain. Failure to do so is a red flag. As their curiosity pushes them to develop a sense for the business, expect their output to shift from a jumble of false alarms to a sensibly-curated set of insights that decision-makers are more likely to care about. To avoid wasted time, analysts should lay out the story they’re tempted to tell and poke it from several angles with follow-up investigations to see if it holds water before bringing it to decision-makers. 
If a decision-maker is in danger of being driven to take an important action based on an inspiring story, that is the Bat-Signal for the statisticians to swoop in and check (in new data, of course) that the action is a wise choice in light of assumptions the decision-maker is willing to live with and their appetite for risk. The analyst-statistician hybrid For analysts sticking to the facts, there’s no such thing as wrong, there’s only slow. Adding statistical expertise to “do things correctly” misses the point in an important way, especially because there’s a very important filter between exploratory data analytics and statistical rigor: the decision-maker. Someone with decision responsibility has to sign off on the business impact of pursuing the analyst’s insight being worth a high-effort expert’s time. Unless the analyst-statistician hybrid is also a skilled decision-maker and business leader, their skillset forms a sandwich with a chasm in the middle. An analyst who bridges that gap, however, is worth their weight in gold. Treasure them! Analytics for machine learning and AI Machine learning specialists put a bunch of potential data inputs through algorithms, tweak the settings, and keep iterating until the right outputs are produced. While it may sound like there’s no role for analytics here, in practice a business often has far too many potential ingredients to shove into the blender all at once. Your analyst is the sprinter; their ability to quickly help you see and summarize what-is-here is a superpower for your process. One way to filter down to a promising set to try is domain expertise — ask a human with opinions about how things might work. Another way is through analytics. To use the analogy of cooking, the machine learning engineer is great at tinkering in the kitchen, but right now they’re standing in front of a huge, dark warehouse full of potential ingredients. They could either start grabbing them haphazardly and dragging them back to their kitchens, or they could send a sprinter armed with a flashlight through the warehouse first. Your analyst is the sprinter; their ability to quickly help you see and summarize what-is-here is a superpower for your process. The analyst-ML expert hybrid Analysts accelerate machine learning projects, so dual skillsets are very useful. Unfortunately, because of the differences in coding style and approach between analytics and ML engineering, it’s unusual to see peak expertise in one individual (and even rarer to that person to be slow and philosophical when needed, which is why the true full-stack data scientist is a rare beast indeed). Dangers of chronic under-appreciation An expert analyst is not a shoddy version of the machine learning engineer, their coding style is optimized for speed — on purpose. Nor are they a bad statistician, since they don’t deal at all with uncertainty, they deal with facts. “Here’s what’s in our data, it’s not my job to talk about what it means beyond the present data, but perhaps it will inspire the decision-maker to pursue the question with a statistician…” What beginners don’t realize is that the work requires top analysts to have a better grasp of the mathematics of data science than either of the other applied breeds. 
Unless the task is complicated enough that it demands the invention a new hypothesis test or algorithm (the work of researchers), statisticians and ML specialists can rely on checking that off-the-shelf packages and tests are right for the job, but they can often skip having to face the equations themselves. For example, statisticians might forget the equations for a t-test’s p-value because they get it by hitting run on a software package, but they never forget how and when to use one, as well as the correct philosophical interpretation of the results. Analysts, on the other hand, aren’t looking to interpret. They’re after a view into the shape of a gory, huge, multidimensional dataset. By knowing the way the equation for the p-value slices their dataset, they can form a reverse view of what the patterns in original dataset must have been to produce the number they saw. Without an appreciation of the math, you don’t get that view. Unlike a statistician, though, they don’t care if the t-test is right for the data. They care that the t-test gives them a useful view of what’s going on in the current dataset. The distinction is subtle, but it’s important. Statisticians deal with things outside the data, while analysts stick to things inside it. At peak excellence, both are deeply mathematical and they often use the same equations, but their jobs are entirely different. Similarly, analysts often use machine learning algorithms to slice their data, identify compelling groupings, and examine anomalies. Since their goal is not performance but inspiration, their approach is different and might appear sloppy to the ML engineer. Again, it’s the use of the same tool for a different job. To summarize what’s going on with an analogy: pins are used by surgeons, tailors, and office workers. That doesn’t mean the jobs are the same or even comparable, and it would be dangerous to encourage all your tailors and office workers to study surgery to progress in their careers. The only roles every business needs are decision-makers and analysts. If you lose your analysts, who will help you figure out which problems are worth solving? If you overemphasize hiring and rewarding skills in machine learning and statistics, you’ll lose your analysts. Who will help you figure out which problems are worth solving then? You’ll be left with a group of miserable experts who keep being asked to work on worthless projects or analytics tasks they didn’t sign up for. Your data will lie around useless. Care and feeding of researchers If this doesn’t sound bad enough, many leaders try to hire PhDs and overemphasize research — as opposed to applied — versions of the statistician and ML engineer… without having a problem that is valuable, important, and known to be impossible to solve with all the existing algorithms out there. That’s only okay if you’re investing in a research division and you’re not planning to ask your researchers what they’ve done for you lately. Research for research’s sake is a high-risk investment and very few companies can afford it, because getting nothing of value out of it is a very real possibility. Researchers only belong outside of a research division if you have appropriate problems for them to solve — their skillset is creating new algorithms and tests from scratch where an off-the-shelf version doesn’t exist — otherwise they’ll experience a bleak Sisyphean spiral (which would be entirely your fault, not theirs). 
Researchers typically spend over a decade in training, which merits at least the respect of not being put to work on completely irrelevant tasks. When in doubt, hire analysts before other roles. As a result, the right time to hire them to an applied project tends to be after your analysts helped you identify a valuable project and attempts to complete it with applied data scientists have already failed. That’s when you bring on the professional inventors. The punchline When in doubt, hire analysts before other roles. Appreciate them and reward them. Encourage them to grow to the heights of their chosen career (and not someone else’s). Of the cast of characters mentioned in this story, the only ones every business with data needs are decision-makers and analysts. The others you’ll only be able to use when you know exactly what you need them for. Start with analytics and be proud of your newfound ability to open your eyes to the rich and beautiful information in front of you. Inspiration is a powerful thing and not to be sniffed at. VICKI JAURON, BABYLON AND BEYOND PHOTOGRAPHY/GETTY IMAGES image used with the HBR article. My favorite interpretation is that the human is a business leader chasing away flocks of analysts while trying to catch the trendy job titles. If you enjoyed this article, check out my field guide to the data science universe here.
https://towardsdatascience.com/data-sciences-most-misunderstood-hero-2705da366f40
['Cassie Kozyrkov']
2019-10-19 15:57:45.445000+00:00
['Analytics', 'Towards Data Science', 'Artificial Intelligence', 'Technology', 'Data Science']
540
The Future of Mobile App Development: 5 Trends for 2021
The Future of Mobile App Development: 5 Trends for 2021 Enterprises are always on the lookout for the latest technology trends to stay ahead of the competition, and mobile application development is no exception. Nevertheless, experts continue to debate the use of enterprise mobile applications against the download rate. To provide clarity, this post looks at the growth and future of mobile application development despite the decline in downloads. I have also shared 5 mobile app development trends for 2021, which many may find useful. Let's get started with enterprise mobile app downloads vs. use statistics. According to a Comscore report, a majority of users (51%) still don't download any apps in a month, and the trend is set to continue. So, does that mean the growth of mobile applications is declining? Certainly not! However, the way mobile apps are consumed across enterprises has changed. The sections below should help enterprises as well as mobile, digital transformation and web app development companies. Here are the 5 app development trends to look for in 2021. A prime reason why mobile apps are widely leveraged is that they can integrate with advanced technologies like AI, IoT, ML and the cloud. So mobile applications are here to stay for at least the next few years. 5 Enterprise mobile app development trends that will dominate in 2021 1. IoT, the future of mobile app development The Internet of Things is growing at a breakneck pace because it gives businesses visibility and control over people and equipment. Extending IoT data into mobile apps gives users real-time data on people and equipment on the go, improving process efficiency. IoT applications have already started to impact enterprises, and top brands have started to invest in this technology revolution to provide a seamless connected environment to users. As per Statista: Internet of Things (IoT) connected devices installed base worldwide from 2015 to 2025 (in billions) 2. Role of Artificial Intelligence in mobile app development The introduction of Artificial Intelligence into the technology space has dramatically transformed the way most businesses function. AI-powered apps are widely used by enterprises to create a smarter user experience with fewer resources, leading to exploding productivity growth and improved cost savings. Customers, meanwhile, are experiencing a more in-depth and personalized mobile experience than ever before. Shared below is an interesting report from Statista on how smartphone users will benefit from AI. 3. Influence of wearable technology on mobile app development Enterprises focus on apps (thanks to custom software development) that connect with wearable gadgets to deliver information in new ways. This will transform a wide range of products and services across industries such as sports, fitness, fashion, hobbies and healthcare. Connecting wearable devices with smartphones will shape the next generation of mobile application development strategies and pave the way for new waves of applications that dramatically improve user experience. As per Grand View Research, the global wearable technology market size was valued at over USD 18 billion in 2014, owing to rapid adoption worldwide. Increasing consumer awareness and a rising technically sound population are also anticipated to drive demand over the forecast period. 4. 
Add a chatbot to your next mobile app Chatbots combined with mobile apps are creating ripples in the enterprise arena, as the combination helps enterprises gather a large volume of user data and take a personalized approach to delivering a seamless experience. Here are some interesting use cases of chatbots for enterprises. 5. Benefits of a Cloud backend for mobile app development The surge in enterprise mobile applications will create storage challenges, and cloud storage is the best option available to overcome them. Moreover, cloud services make data accumulation seamless for your business, while security measures and management become simpler and easier. In addition to consumers, cloud-based companies will see increased profits in this space. According to a report, the cloud app development market will surge to 101.3 billion USD in 2022.
https://medium.com/@zarajohn/the-future-of-mobile-app-development-5-trends-for-2021-fc41dcbf7459
[]
2021-06-08 19:50:14.043000+00:00
['5 Trends', 'Mobile Development', 'Mobile Apps', 'Mobile App Development', 'Future Technology']
541
Comprehensive Pi Network Cryptocurrency Review
In short, Pi is the first and only cryptocurrency that you can mine on your mobile phone using either the Android or iOS app without draining your battery. I was dubious about the battery life claim, but having experimented and mined with it for a number of months now, I haven’t noticed any significant change to the battery life on my iPhone. This comes down to how Pi actually operates. Can you mine Pi? People are interested in the concept of mining cryptocurrency, or in other words, receiving a reward for contributing something to the network. Pi facilitates this with the idea of mining from your mobile phone. However, unlike the proof-of-work method of mining employed by Bitcoin, Pi uses a consensus algorithm based on the Stellar Consensus Protocol and what is called a “Federated Byzantine Agreement”. A Federated Byzantine Agreement works by having a number of nodes agree amongst themselves on what block should be next in the blockchain. So instead of competing against each other in a Bitcoin-style proof-of-work, the miners are working together to come to agreement. The agreement approach requires more communication between miners but uses very little power, making it more economical for everyday people to be involved in mining. So, yes, you can mine Pi. However, in order to make Pi mining possible on mobile devices, a number of different roles exist that all contribute to mining in their own way. Pioneers A pioneer is the most basic user type. This is someone who accesses Pi using the mobile app. Each time a pioneer opens up the app and chooses to mine for 24 hours, they are validated and can request transactions. Contributors A contributor is someone who is providing a list of trusted pioneers to the Pi network. In other words, the contributor knows and trusts this circle of people. This is their security circle. In return for providing this circle of trust to the network, contributors earn additional Pi when they mine. The security circles created by contributors all over the world combine to form a trust network. Contributors also operate using the mobile Pi Network app. Ambassadors Ambassadors are people who bring other users into the Pi network. In return for bringing in those new people, ambassadors can also earn additional Pi when they mine. This is what has caused some people to suggest that Pi could be a pyramid scheme. In practice though, it is more like an affiliate program where the referrals are required in order to improve the security and trust of the overall network. Ambassadors operate on the mobile Pi Network app as well. Nodes Nodes are where things get a little different. A node is someone who is a pioneer and a contributor but who is also operating the Pi node software on their computer. A Pi node is where the actual algorithm operates and these nodes take into account the trust data that is being provided to them by contributors. At the time of writing, nodes are in test phase. Getting started with Pi To create a Pi wallet, download the Pi Network app for either Android or iOS and when you go through the signup process, use my invite code “ mjbi “ to join my security circle — this will automatically boost your Pi earning rate each time you mine Pi, and yes, it will also increase my Pi earning rate as well. Invite code: mjbi Follow the instructions on the signup form to setup your account and to enter my invite code. You must verify your account with a phone number. This is important for the trust of the network. 
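To make the security-circle and trust-network ideas above a little more concrete, here is a toy Python sketch. It is an illustration only, under simplified assumptions (mutual trust, a made-up "widely trusted" threshold); it is not Pi's actual implementation or the Stellar Consensus Protocol.

```python
# Toy illustration only: NOT Pi's real code or the Stellar Consensus Protocol,
# just a simplified picture of how per-user security circles can combine
# into a global trust graph.
from collections import defaultdict

def build_trust_graph(security_circles):
    """security_circles maps a contributor to the pioneers they vouch for."""
    graph = defaultdict(set)
    for contributor, trusted in security_circles.items():
        for pioneer in trusted:
            graph[contributor].add(pioneer)
            graph[pioneer].add(contributor)  # trust is treated as mutual here
    return graph

def is_widely_trusted(user, graph, threshold=2):
    """A crude stand-in for 'validated': enough distinct people vouch for you."""
    return len(graph[user]) >= threshold

circles = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "dave"},
    "carol": {"alice"},
}
graph = build_trust_graph(circles)
print(is_widely_trusted("alice", graph))  # True: alice has 2 connections
print(is_widely_trusted("dave", graph))   # False at threshold=2: only bob vouches for dave
```

The point of the real network is the same in spirit: many small, personal circles of trust overlap into one large graph that the consensus nodes can consult.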
You can only have one Pi wallet (or account) and it can only exist on one device. The point is that each Pioneer is a unique individual. That individual is trustworthy and has been validated as being a real person. How to mine Pi on your phone Once you have your account setup, you will see a screen like this: Screenshot by Matthew Brown from the Pi Network App Every time you open the app, if it’s been more than 24 hours since the last time you began mining, it will prompt you to tap the Mine Pi icon to resume mining. It’s important to note that you must tap to mine every 24 hours. While this is a bit annoying, it is part of the process of maintaining a security circle that in turn facilitates the network security. After you have tapped the Mine Pi icon, you are prompted to confirm your security circle. The more members of your security circle, the more you will earn. You can add new members by clicking the “Edit” button, or you can begin mining by clicking the “Mine” button. Screenshot by Matthew Brown from the Pi Network App If you make any changes to your security circle after you begin mining, they won’t contribute to your earn rate until the next mining period. You can join my security circle by entering my invite code when you sign up: MJBI Sign up with Pi here. After you commence mining, Pi will confirm your earning rate based on your security circle and you can close the app. Screenshot by Matthew Brown from the Pi Network App It’s worth noting that the ability to mine Pi may cease when Pi moves to the Mainnet stage. Currently, the team is exploring possibilities and community input as to how mining might continue to be a part of the network. Any coins mined prior to the Mainnet stage will be transferred to your Mainnet account, so you won’t lose anything. Improve your security circle You can improve your security circle in two ways: Invite existing Pi users who you know (from your phone contacts list) Invite new people to join the Pi network, preferably people you know. You can SMS invitations to your contacts from directly within the Pi app. Regardless of which way you go about improving your security circle, the more members you have in your circle, the higher your Contributor bonus will be. Screenshot by Matthew Brown from the Pi Network App You can see in the screenshot below that I am earning 0.2 Pi per hour from my Pioneer role. I’m then earning a bonus 0.04 Pi per hour from my 1 security connection. The more security connections I add, the larger my circle and the bigger my earnings bonus will be. If you invite new users to the Pi network, they will also generate an Ambassador bonus for you in addition to a Contributor bonus. To get started mining Pi with an instant Contributor bonus, make sure that when you sign up for Pi you use a referral code. I’d appreciate it if you used mine! My Pi invitation code is: mjbi Can you convert or exchange your Pi to cash or other cryptocurrency? Right now, no. Pi is still in the Phase 2 of development, the Testnet phase. This means it is not connected to any external networks and is still being developed and tested. As of December 2020, Pi surpassed the 10 million user mark though, which is a major milestone. Once Pi enters Phase 3, the Mainnet, that is when connections to exchanges will begin to occur. At that time, you will be able to start exchanging Pi for other currencies. So, when will Pi move to Phase 3, Mainnet? 
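(Before getting to the Mainnet question, here is a quick sketch of how the earning rates quoted above combine. The 0.2 Pi/hour base rate and 0.04 Pi/hour per connection are simply the figures from my screenshot at the time, and the assumption that the bonus scales linearly with connections is just what the app suggested; your rates will differ and they change as the network grows.)

```python
# Rough earnings sketch using the rates from the screenshot above;
# actual rates vary per user and change over time.
BASE_RATE = 0.20             # Pi per hour for the Pioneer role
BONUS_PER_CONNECTION = 0.04  # extra Pi per hour per security connection (assumed linear)

def hourly_rate(connections):
    return BASE_RATE + BONUS_PER_CONNECTION * connections

for connections in (1, 3, 5):
    per_day = hourly_rate(connections) * 24
    print(f"{connections} connection(s): {hourly_rate(connections):.2f} Pi/h, about {per_day:.1f} Pi/day")
# 1 connection(s): 0.24 Pi/h, about 5.8 Pi/day
# 3 connection(s): 0.32 Pi/h, about 7.7 Pi/day
# 5 connection(s): 0.40 Pi/h, about 9.6 Pi/day
```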
An announcement was made within the Pi app on December 8, 2020, when Pi reach the 10 million user mark, the Pi team made the decision to set a roadmap for the path to Mainnet within 1 year. In other words, by the end of 2021, Pi should be on Phase 3, Mainnet, and you should soon after be able to connect to exchanges. That said, the Pi app is intended to be a globally accessible peer-to-peer marketplace that can have apps built on top of it. Apps and other means of exchange are necessary for Pi to achieve its vision, so these options may begin to be developed before Mainnet is reached. See the Pi roadmap for more information on each phase. Pros and Cons of Pi Pros of Pi Easy to mine. Fun to be part of a cryptocurrency network and community that is in development. It’s free. Community feedback is helping to shape the future of Pi. There is a roadmap outlining the plans for Pi. The development team is active in the community. Cons of Pi It’s not currently exchangeable for anything. Being in development there is a risk that Pi may never go anywhere and you may never be able to do anything with your coins. By default the app shows your name to your security circle. I suggest changing this in your settings. You have to manually tap to mine every day. Other things to note about Pi Data you share in the community section of the app is not private, it is a forum. Be careful what you share. Pi is built around trust and so there is some assumption that you know the people in your security circle. As such, there is capability to message and chat with the people in your circle. Be careful what you share, especially if you don’t know all the people there. There are a lot of scams going around trying to take advantage of people who want to convert their Pi into something else. This isn’t Pi’s fault, but it is something to be very careful of. You should only listen to messages from Pi via the official Pi channel through the Pi app. What do you think about Pi? I’ve been mining Pi with just one security circle connection for about 6 months now. It’s been interesting experience to be part of a blockchain network that is in development. I’m interested to begin experimenting with the node aspect of Pi and building out my security circle further. I’m also keen to hear what you think about it. Have you used Pi? Do you still use it? Let me know your thoughts in the comments below! Recommended crypto app I recommend Crypto.com as a simple way to get started buying and selling cryptocurrencies as well as earning interest on your balances. Open an account on the app here and we both get $25 USD once you meet the staking requirement or open an account on the exchange here and we both get $50 USD once you meet the staking requirement! Open an account on both to get both bonuses. Just make sure that wherever you sign up, you use my invite code h9tfssd364 to be eligible for the signup bonus. Crypto.com referral code: h9tfssd364 Crypto.com is the first cryptocurrency company in the world to have ISO/IEC 27701:2019, CCSS Level 3, ISO27001:2013 and PCI:DSS 3.2.1, Level 1 compliance, and independently assessed at Tier 4, the highest level for both NIST Cybersecurity and Privacy Frameworks. Crypto.com holds an Australian Financial Services License, making it one of the only crypto companies in the world that is licensed in Australia. References
https://medium.com/@matthewjbrown/comprehensive-pi-network-cryptocurrency-review-2573adc864eb
['Matthew Brown']
2021-01-29 00:34:46.254000+00:00
['Pi Network', 'Cryptocurrency', 'Mobile Apps', 'Technology', 'Crypto']
542
Nigeria: Trueflutter Dating App Raises Funding
Nigerian dating app Trueflutter has announced an undisclosed amount of funding from three local angel networks: Lagos Angel Network, SSE Angel Network and SGC7375 Angels. Trueflutter is a startup that was co-founded in 2018 by Dare Olatoye to connect African singles on the continent and beyond. It uses advanced matchmaking algorithms to help users find partners compatible with their African culture and value systems, whether for a long-term relationship or commitment. Trueflutter says it plans to use the investment to expand its growth across Africa. "We know our people, understand their preferences and have put that knowledge into building the world's most culturally compatible matchmaking app. The backing of these networks is an important milestone in our mission to make the Trueflutter platform available to Africans globally. The investment will help us accelerate our ambitious growth plans and take it to Africans across the globe", said Dare Olatoye, Trueflutter's co-founder and chief executive officer (CEO). A spokesperson for SSE Angel Network, Tokoni Amiesimaka, added: "We are pleased to support Dare and his team's customer-centric approach and focus on building a secure, culturally-intelligent, and globally accessible dating platform for Africans." "With the widespread adoption of digital social platforms, we believe our investment will empower the Trueflutter team to help African singles, especially millennials, to initiate and build lasting relationships online." For security reasons, the platform works to keep prostitutes and fraudsters off the app and to assure the safety of women who physically meet men from the app. Users have to upload their profile pictures through the selfie mode only, in order to ensure that the owner of the account is the person whose picture has been uploaded. The app has in-built audio and video call features for communication between users. Moreover, users can use the advanced privacy settings to choose to be discoverable by a select few, everyone or no one. Before considering a match with someone, you can listen to the person's audio bio to hear what he or she sounds like. You can then send or receive invites from potential dates. Trueflutter recorded its first 2,000 signups in its first 6 weeks and hit the 6,000 user mark in the sixth month. The mobile app is currently being redeveloped and will be available on Android and iOS when it is released.
https://medium.com/@digitaltimes-2020/nigeria-trueflutter-dating-app-raises-funding-5293aa5746ff
['Digital Times Africa']
2020-12-15 17:22:01.870000+00:00
['Technology', 'Nigeria', 'Startup', 'Funding', 'Dating App']
543
Pernicious Plastics #1
I'm not a very patient person, so when I'm shopping in 7–11 and their tannoy is trying to #greenwash me into (more) mindless shopping, I start thinking about practical solutions to the plastic problem. I love Precious Plastics (and we are setting it up at Western Academy of Beijing). So, bear with me… here's a short sequence of considerations building up to a solution. We already know… Around 8 million tonnes of plastics are carried into our oceans each year — this is the downside of a consumer boom wrapped in small colourful pieces of plastic. From noodle packets to coffee sachets, the stuff we buy is wrapped for convenience and attractiveness in colourful thin-film plastics which end up in the sea. This sucks. That BIC lighter looked so tasty. Now my chicks won't survive and reproduce. UNESCO says that plastic waste kills a million seabirds every year (and 100,000 marine mammals). 20 rivers produce 90%* of that total. 80% of seaborne plastic waste comes straight offa the land — rain washes it into street culverts and it follows nature downstream. * Down on the sunny banks of the Yangtze, Indus, Yellow, Hai He, Ganges, Pearl, Amur, Mekong, Nile, or Niger. 7–11 Saves the Mariniverse! (not) 7–11 is doing away with single-use plastic shopping bags — so that's one bag fewer in a shop with 25,000 different things wrapped in single-use plastics. Sorry about the sound quality. Maybe my phone has Novel Corona Virus. So what should folk do to tackle the problem? Tackle it at source! Baltimore Harbour's solution: Mr Trash Wheel Baltimore has fixed the problem with a simple bit of kit that takes up little space, advertises the value of community-based proactive solutions and generates useful recyclables — behold Mr Trash Wheel! We love Mr Trash Wheel. You can find him/her/it here. More still about this cool project, which set the local harbour authority and some private partners back $720k or so. (Here's the Wiki.) If 20 rivers produce 90% of the pollution, then we need 800 Mr Trash Wheels! You can build this from junk, kids! Where there's muck there's brass, as the trash-pickers sing. The estuaries are crawling with plastic pickers. When plastic waste is gathered, recycled and repurposed locally (as Precious Plastics demonstrates), then there's every reason to expect that small local economies could thrive by protecting and maintaining floating trash-catchers. What will it take? Well, a good start would perhaps be one on each of the 40 major tributaries of those 20 rivers. If Mr Trash Wheel is the prototype, then the very most that a production model should cost is 1/3rd of that prototype price — and the least is perhaps 1/15th. That puts the bill for 800 of them at roughly $200 million at the top end, and $200 million is a lot of money (unless you count in mansions, footballers or dead seagulls). What else can you buy for that much money? A loft mansion in LA. Or a celebrity soccer player from Brazil… Mr Neymar Jr might even help if you asked him nicely. He's not a known hater of marine life. Solution time: Choose any river. If we install a few dozen around the world, then the local authorities which govern those top 20 polluters will find us hard to ignore. Get some buddies together and crowdfund (unless you are flush and can afford to do it without help, eh, Mr Neymar Jr, Mr 7–11). Build a new sibling for Mr Trash Wheel. Open-source the plans. Show others how easy this is. If I get 100 people to support this idea then I will begin doing the above. So, are you in?
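For anyone who wants to check the sums above, here they are as a tiny Python sketch. The $720k prototype cost, the 800-wheel target and the 1/3 to 1/15 production-cost guesses are the rough figures from this post, nothing more precise than that.

```python
# Back-of-the-envelope cost sketch for the figures above.
PROTOTYPE_COST = 720_000   # what Mr Trash Wheel reportedly cost to build
WHEELS_NEEDED = 800        # the rough target for 20 rivers plus tributaries

high_estimate = WHEELS_NEEDED * PROTOTYPE_COST / 3   # production at 1/3 of prototype cost
low_estimate = WHEELS_NEEDED * PROTOTYPE_COST / 15   # production at 1/15 of prototype cost

print(f"High: ${high_estimate / 1e6:.0f}M, Low: ${low_estimate / 1e6:.0f}M")
# High: $192M, Low: $38M  -- hence "roughly $200 million" at the top end
```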
https://medium.com/@Canyudo/pernicious-plastics-1-1fe2a21b4bd8
[]
2020-02-07 15:01:05.716000+00:00
['Solutions', 'Crowdfunding', 'Pollution', 'Plastic Pollution', 'Appropriate Technology']
544
The Real Story of Automation Beginning with One Simple Chart
The Real Story of Automation Beginning with One Simple Chart Robots are hiding in plain sight. It’s time we stop ignoring them. There’s a chart I came across in 2017, and not only does it tell an extremely important story about automation, but it also tells a story about the state of the automation discussion itself. It even reveals how we can expect both automation and the discussion around automation to continue unfolding in the years ahead. The chart is a plot of oil rigs in the United States compared to the number of workers the oil industry employs, and it’s an important part of a puzzle that needs to be pieced together before it’s too late. What should be immediately apparent is that as the number of oil rigs declined due to falling oil prices, so did the number of workers the oil industry employed. But when the number of oil rigs began to rebound, the number of workers employed didn’t. That observation itself should be extremely interesting to anyone debating whether technological unemployment exists or not, but there’s even more to glean from this chart. First, have you even heard of automated oil rigs, or are they new to you? They’re called “Iron Roughnecks” and they automate the extremely repetitive task of connecting drill pipe segments to each other as they’re shoved deep into the Earth. Pictured: National Oilwell Varco’s AR3200 Automated Iron Roughneck Thanks to automated drilling, a once dangerous and very laborious task now requires fewer people to accomplish. Automation of oil rigs means that one rig can do more with fewer workers. In fact, it’s expected that what once took a crew of 20 will soon take a crew of 5. The application of new technologies to oil drilling means that of the 440,000 jobs lost in the global downturn, as many as 220,000 of those jobs may never come back. Now look back at the chart again, and notice how quickly this all happened. It took TWO YEARS. How did it happen so fast? Because the oil industry didn’t really need the workers it lost in the first place. It’s the oil industry. It’s used to making lots of money, and when you’re making money hand over fist, you don’t need to focus on efficiency. Being lean and mean is not your concern. However, that changes when times get tough, and times got very tough for the oil industry as oil prices plummeted thanks to new competition from yet another technological advancement — fracking. So once it became important to increase efficiency, that’s exactly what the oil industry did. It let people go and it invested in automation. In the summer of 2016, oil prices were no longer under $30 per barrel, and had gone back up to around $50 per barrel where they remain. That’s half of the $100 per barrel they’d gotten used to, which is fine as long as they’re able to produce at twice the efficiency. As a result, like a phoenix rising, they emerged transformed. Oil rigs returned to drilling, but all the rig workers didn’t. Those who were let go became simply unnecessary overhead. Sleeping Through a Wake Up Call This is a story of technological unemployment that is crystal clear, and yet people are still arguing about it like it’s something that may or may not happen in the future. It’s actually a very similar situation to climate change, where the effects are right in our faces, but it’s still considered a debate. Automation is real, folks. Companies are actively investing in automation because it means they can produce more at a lower cost. That’s good for business. 
Wages, salaries, and benefits are all just overhead that can be eliminated by use of machines. But hey, don’t worry, right? Because everyone unemployed by machines will find better jobs elsewhere that pay even more… Well, about that, that’s not at all what the history of automation in the computer age over the past 40 years shows. Yes, some with highly valued skills go on to get better jobs, but they are very much the minority. Most people end up finding new paid work that requires less skill, and thus pays less. The job market is steadily polarizing. Decade after decade, medium-skill manufacturing/office jobs have been disappearing, and in response, the unemployed have found new employment in new low-skill service jobs. People unemployed by machines still require income, so they end up finding what they can get. At the same time, they are competing against others doing the same thing (as long as the labor market remains involuntary) and thus people are bidding down their own wages and taking any job they can get in a race against the machines. This also serves to make investments in automation less attractive. As an added bonus, the jobs that are being automated are more productive jobs than most of the jobs being newly created. Cheaper human labor and an increasing number of low productivity jobs together then result in a “paradoxical” deceleration of productivity growth. Long story short, the middle of the labor market is disappearing. That’s the reality, and it’s been happening for decades. A landmark 2017 study even looked at the impact of just industrial robots on jobs from 1993 to 2007 and found that every new robot replaced around 5.6 workers, and every additional robot per 1,000 workers reduced the percentage of the total population employed by 0.34% and also reduced wages by 0.5%. During that 14-year period of time, the number of industrial robots quadrupled and between 360,000 and 670,000 jobs were erased. And as the authors noted, “Interestingly, and perhaps surprisingly, we do not find positive and offsetting employment gains in any occupation or education groups.” In other words, the jobs were not replaced with new jobs. It’s expected that our industrial robot workforce will quadruple again by 2025 to 7 robots per 1,000 workers. (In Toledo and Detroit it’s already 9 robots per 1,000 workers) Using Acemoglu’s and Restropo’s findings, that translates to a loss of up to 3.4 million jobs by 2025, alongside depressed wage growth of up to 2.6%, and a drop in the employment-to-population ratio of up to 1.76 percentage points. Remember, we’re talking about industrial robots only, not all robots, and not any software, especially not AI. So what we can expect from all technology combined is undoubtedly larger than the above estimates. Automation has been happening right under everyone’s noses, but people are only beginning to really talk about the potential future dangers of automation reducing the incomes of large percentages of the population. In the US, the most cited estimate is the loss of half of all existing jobs by the early 2030s. It’s great that this conversation is finally beginning, but most people have no idea that it’s already happening. And about half of those people who know it’s happening, are relying on magical thinking to support their beliefs that automation is of no concern. To the contrary, it is of massive concern. 
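To see where those projections come from, here is the arithmetic spelled out as a short Python sketch. The per-robot coefficients are the ones from the 2017 study cited above; the roughly 1.75 robots per 1,000 workers baseline is my own assumption, chosen only so that a quadrupling lands at the 7 per 1,000 figure quoted in the text.

```python
# Rough reproduction of the projections above, using the per-robot
# coefficients attributed to the 2017 study cited in the text.
EMPLOYMENT_EFFECT = 0.34   # percentage-point drop per extra robot per 1,000 workers
WAGE_EFFECT = 0.5          # percent of wage growth depressed per extra robot per 1,000 workers

baseline = 7 / 4           # ~1.75 robots per 1,000 workers today (assumed)
projected = 7.0            # robots per 1,000 workers expected by 2025
delta = projected - baseline  # ~5.25 additional robots per 1,000 workers

print(f"Employment-to-population drop: {delta * EMPLOYMENT_EFFECT:.2f} percentage points")  # ~1.8 pp
print(f"Wage growth depressed by:      {delta * WAGE_EFFECT:.1f} percent")                   # ~2.6 %
# Applying a drop of that size to the US working-age population is what yields
# the "up to 3.4 million jobs" figure quoted above.
```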
Charting the Course of History One of the most telling statistics I’ve come across in regards to the automation discussion is how almost everyone in the US knows we’ve lost manufacturing jobs over the past three decades. 81% know that very real fact according to a poll of over 4,000 adults by Pew Research. What few people know however is that at the same time the total number of jobs has decreased, total manufacturing output has increased. The US is manufacturing more now than it ever has, and only 35% of the country knows that’s true. The percentage of Americans who know both of the above facts are true is a mere 26%. Only one out of every four Americans knows that thanks to technology, we’re producing as a country far more with less. Most people don’t know that, or blame things like immigrants or offshoring for job losses, even though offshoring is only possible due to technology improvements and only accounts for 13% of manufacturing job loss. That’s a problem. We can’t make the changes we need to make if people aren’t aware the problem exists, or think the existence of the problem is something to be debated. We can’t agree on solutions like unconditionally guaranteeing everyone a basic income as a rightful productivity dividend if people are actively being unemployed by growing productivity and the discussion is framed as a future danger to our social fabric instead of a clear and present danger. Consider this: What happens when the next recession hits? Falling oil prices simulated a recession in the oil industry, which responded with mass unemployment and investment in automation. What happens when all industries respond with mass unemployment and investments in automation? If we look at recent history, each successive downturn has resulted in the permanent shrinkage of the labor market. Peak labor appears to have already occurred back in 2000. Meanwhile, technology is only getting cheaper, so each successive drop squeezes out more human labor, and is able to automate more lower-skill labor that is newly more expensive than machines. Expect the next recession to put over ten million of people out of work, and for the economy to realize they didn’t really need those people as workers after all to produce what was being produced. Where 79% of eligible workers aged 25–54 were employed, expect that to fall to 69% or below. The economy simply doesn’t need the number of people it currently employs with the technology we already have available. To add insult to injury, it’s taking longer for the unemployed to find employment, so those suffering next will suffer longer. The Automation of an Increasingly Divided Country To add further insult to injury, the story of automation in America is one where mostly liberal metro areas enjoy the benefits while mostly conservative rural areas suffer the consequences. According to a Daily Yonder analysis, 80% of jobs created in 2016 were in the 51 metro areas of a million people or more. These metro areas gained 1.2 million jobs between January 2016 and 2017 — just one year. Meanwhile, rural areas ended up with 90,000 fewer jobs over the same time span. More than 52 million Americans are now currently living in counties considered as being in economic distress. A report by the Economic Innovation Group (EIG), a bipartisan research and advocacy organization, discovered a close link between community size and prosperity, where counties with under 100,000 people are 11 times more likely to be distressed than counties with more than 100,000 people. 
Understand how automation is helping to divide the country along “red” and “blue” lines, and the growing polarization of our politics will immediately make more sense. This is perhaps the most dangerous effect of technological unemployment of all, the erosion of democracy itself as partisanship tears our nation apart in a process that is difficult to differentiate between that of a living cell mitotically dividing in two. The Ignored Math of Increasingly Productive New Businesses Another thing to recognize is that as technology enables businesses to hire fewer workers, that means to obtain “full employment”, where everyone who wants a job has one, the economy requires that everyone work shorter work weeks, or else an ever growing number of businesses is needed in order to employ the same amount of people. If the average amount of people the average business employs is 10, it would take 10 businesses to employ 100 people. Assuming “full-time” continues to be defined as 40 hours, if technology allows 1 person to do the work of 10, the average people a business employs drops to 1, so the number of businesses needed to employ everyone grows to 100. (note: compensating for population growth would require even more than 100) So is that happening? No. That’s not happening. The reverse is happening. New business creation is slowing, not accelerating. But the new businesses being created are rising in value every year, which matches what we’d expect to see from each new business using the latest technology to do far more with far less. Every year there are more new businesses worth over $1 billion. Look at Tesla versus early 20th century Ford Motors, or Instagram versus Kodak, or Facebook versus all the newspapers that used to exist. These companies are worth hundreds of billions of dollars and employ a fraction of the people as the most valued companies of the past once did. There’s also an assumption that human demand is infinite, and so no matter how many jobs are eliminated by technology, a human demand for infinitely more stuff will always create new jobs. This belief exists alongside continually shrinking discretionary spending as a share of total spending. This should also not surprise anyone. People can’t spend money if they don’t have any. The More You Know… Now, let me ask you a question. How much all of the above information were you already fully aware of, to the point none of it was new to you? Now ask yourself why? Automation clearly exists, and clearly is already affecting the economy. So why is the debate about automation even a debate at all? That’s perhaps the scariest thing about the oil rig automation chart, along with the rest of the charts I’ve included here, the fact existing evidence is not part of the debate. Just as climate change has been something we’ve debated for decades while the effects have only grown more extreme, so too is automation being denied as it grows more extreme. My fear is that ignoring the problem will continue. Why not? We’ve ignored manufacturing being automated. Yes, we know it happened, but we’ve pretended that everyone just went on to find new paid work, without critically evaluating the nature of that paid work. Unemployment isn’t a problem, right, because the unemployment rate is at a record low? 
Tell that to the person who went from a 40-hour per week career with benefits and a sense of security to three different jobs/gigs without any benefits, working 80 hours per week to earn less total income in a far more insecure life just trying to get by each month. Tell that to the person who feels marriage has become something only the rich can afford any more. Tell that to the person who attempted suicide, or self-medicated their depression with opioids after their town’s manufacturing plant closed down, obliterating their town’s local economy and leaving them with no means of paying others for their own existence. Technological unemployment is real.The only honest debate to be had is over the nature of re-employment, and all evidence points to a shrinking employment-to-population ratio, a growth in low-skill jobs, a transition to alternative work arrangements like temporary and “gig” labor, rising variance in monthly incomes, erosion of benefits, longer terms of unemployment, and what can only be called a pandemic of economic insecurity as survival — instead of the American Dream — increasingly becomes the primary goal of the majority of Americans. Meanwhile, some other Americans are doing extremely well. Why? Because they own the machines. They own the lobbyists. They write the laws. They write the tax code. They have the power. And so they are now the sole beneficiaries of the machine labor producing greater and greater amounts of national wealth, where once that wealth was more widely shared with those producing it. Hundreds of thousands of jobs were just lost due to oil rig automation and no one (except for those unemployed and their families) batted an eye. Hundreds of thousands of jobs have been lost over the years to industrial robots. Hundreds of thousands more jobs were just lost this year in retail due to the unstoppable efficiency of Amazon and the more than 100,000 robots it employs. And yes, Amazon is creating lots of new jobs too, but for every job it creates, it has eliminated two or more by eliminating its far less efficient brick and mortar competition. Unless you’re talking about net job creation, and the details of those jobs created, you’re dishonestly talking about employment. The question is, at what point will enough people recognize that automation is a very real problem that must be confronted immediately. When millions of trucking jobs are automated? When millions more retail jobs are automated? How many jobs need to be erased before we collectively create the will to do something? And at what point will we recognize that the problem of automation should not be a problem at all, and that we want as much work automated as possible? When will we realize that automation is a blessing, not a curse, and that the benefits of machine labor should be spread across all of society instead of concentrated in the hands of a relative few, especially when all the technology originated from taxpayer-funded R&D and represents a technological inheritance from those long dead who passed their knowledge down to us generation after generation? And when will the so far sole beneficiaries of that inheritance realize that although they need fewer laborers, they still need consumers? I hope that time is soon, very soon, because I look around at our reality, and I wonder if we’re going to get our act together before it’s too late, if it isn’t already. As long as we force each other to work for money in order to live, automation will work against us instead of for us. 
It is a civilizational imperative that we decouple income from work so as to create economic freedom for all. Without an unconditional basic income, the future is a very dark place. With unconditional basic income, especially one that rises as productivity rises as a rightful share of an increasingly automating economy, the future is finally a place for humanity.
https://medium.com/basic-income/the-real-story-of-automation-beginning-with-one-simple-chart-8b95f9bad71b
['Scott Santens']
2018-12-14 22:03:40.356000+00:00
['Tech', 'Politics', 'Basic Income', 'Automation', 'Technology']
545
From this Year’s Annual Report: Making Benefits Eligibility Information More Accessible
Katie Zeng — NYC Opportunity Many New Yorkers are not aware of government benefits for which they are eligible. To make benefits information more widely available, NYC Opportunity released the first-ever NYC Benefits Screening Application Programming Interface (API). It makes eligibility criteria for 30+ social service benefits easily available to anyone using or creating other technology-based tools. This expands the network and accuracy of tools New Yorkers and social service professionals can use to find benefits. New Yorkers can go to ACCESS NYC, the City’s award-winning online benefits screening platform, and answer a set of questions to find out what benefits they may be eligible for and how to apply. The API extends the reach of ACCESS NYC by making the programming rules that produce ACCESS NYC’s screening results available for use in other technology applications. Community-based and civic technology organizations and other groups can use the API to make benefits screening part of their existing technology tools, making it easier to advise their clients on which benefits to apply for and how to do so. The API is an innovation in the benefits landscape, demonstrating how technology can make crucial poverty-fighting tools more widely available and empowering more organizations to support their clients in new ways. You can read the full annual report here.
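To give a feel for how a community-based organization might consume such an API, here is a minimal Python sketch. The endpoint URL, request fields and response shape below are hypothetical placeholders, not the actual NYC Benefits Screening API contract; the official documentation defines the real schema.

```python
# Hypothetical sketch of calling a benefits-screening API.
# The URL, request fields, and response format are placeholders,
# NOT the real NYC Benefits Screening API schema.
import requests

API_URL = "https://example.nyc.gov/benefits/api/screening"  # placeholder endpoint

household = {
    "householdSize": 3,
    "monthlyIncome": 2100,
    "hasChildren": True,
}

response = requests.post(API_URL, json=household, timeout=10)
response.raise_for_status()

# Print each program the (hypothetical) response says the household may qualify for.
for program in response.json().get("eligiblePrograms", []):
    print(program["name"], "-", program["howToApply"])
```

The value of exposing the eligibility rules this way is exactly what the post describes: any case-management tool or community website can embed a call like this instead of re-implementing the rules themselves.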
https://medium.com/nyc-opportunity/from-this-years-annual-report-making-benefits-eligibility-information-more-accessible-eb0735efe971
['Nyc Opportunity']
2020-02-07 16:16:01.366000+00:00
['API', 'NYC', 'Technology', 'Government', 'Innovation']
546
QuarkChain Monthly Project Progress Report: July, 2020
Welcome to the 55th QuarkChain Monthly Report. This is our July issue. We post monthly reports covering development progress, news, and events at the end of each month. In the future, QuarkChain strives to do even better. Let's review what happened in the past month! The QuarkChain Environmental Governance Blockchain Platform will be used for domestic construction resource management and trading in China. QuarkChain reached an in-depth technical cooperation with AWS. QSwap is under development: a new decentralized exchange product with native-token functionality. DeFi games are under development. 1) Development Progress Major Updates 1.1 Added: Added a JSON RPC to support total token balance counting; added a total token balance counting script; integrated Prometheus with pyquarkchain for monitoring. 1.2 Updated: Updated the weekly checkdb timeout parameter for improvement; updated the devnet config to enable native token system contracts by default; fixed the transaction receipt contract address in case of RLP encoding failure. 2) Recent news 2.1 QuarkChain announced a strategic cooperation with a government body to jointly develop an Environmental Governance Platform on Blockchain QuarkChain announced official cooperation with an ecological department under a Chinese provincial government. Both parties will work together on developing an Environmental Governance Platform on Blockchain (hereinafter referred to as "EGPB"), which will first be used in a northwestern province of China for construction resource management and trading. Read more: QuarkChain announced a strategic cooperation with the official government to jointly develop Environment Governance Platform on Blockchain 2.2 QuarkChain and Amazon Web Services reached an in-depth technical cooperation and launched the Enterprise BaaS Platform Recently, QuarkChain reached an in-depth technical cooperation with Amazon Web Services (AWS) and launched a high-performance, highly flexible enterprise "Blockchain as a Service" (BaaS) one-stop application platform on AWS. Clients can build more highly available and more customized applications on the platform. Read more: QuarkChain and Amazon Web Service reached in-depth technical cooperation and launched the Enterprise BaaS Platform 2.3 QuarkChain open-source code will be kept in the Arctic for the next 1000 years GitHub announced its Arctic Code Vault plan, and QuarkChain has been selected. Our open-source code will be kept in the Arctic for the next 1000 years. All public repositories are stored in the form of snapshots in Svalbard, Norway, which is also home to the global seed vault for the preservation of food crop seeds. 3) Events 3.1 QuarkChain founder and CEO Dr. Qi Zhou introduced the next-generation DeFi platform at the 2020 On-Chain Fintech Conference in Seoul, South Korea QuarkChain founder and CEO Dr. Qi Zhou gave a speech on building the next generation of DeFi platform. In his speech, he covered: 1. the motivation for building the next generation of DeFi platform; 2. challenges and opportunities; 3. the contributions QuarkChain makes to this revolution. Click here to watch the full video: 3.2 CryptoMurMur Interview with Qi Zhou QuarkChain participated in the 2020 On-Chain Fintech online conference held in Seoul, South Korea, on July 24th. QuarkChain founder and CEO Dr. Qi Zhou attended the conference via video link and gave an online talk titled "Building the next-generation DeFi platform".
The purpose of this conference was to discuss how applications on the blockchain will affect the future development of financial technology. Relevant Korean associations in the economic field, regulatory agencies, several major banks, and global Internet companies all attended the meeting, and the conference topics covered many financial technology fields. NH Nonghyup Bank, Seoul City Council, Woori Bank, Hana Bank, Corporate Bank, Shinhan Financial Group, Visa Korea, IBM, City Bank, etc. attended the conference. In addition to online participation, more than 200 people attended the event on-site. Read more: QuarkChain shared its unique DeFi technology at the On-Chain Financial Technology Conference in Korea Watch the full video: Qi Zhou's talk at the On-Chain Financial Technology Conference in Korea: Building the Next Generation of DeFi Platform 3.3 QuarkChain Korean community held a quiz contest, and the size of the community doubled For the Korean community, we held a quiz event for 5 days. The quiz consisted of 10 difficult questions about the latest updates to QuarkChain, and a total of 50 people with perfect scores or the most referrals were rewarded. Despite the difficult mission, more than 400 people participated, and the number of Korean community members has doubled, which is expected to attract more attention from the Korean community in the future. 4) Upcoming Events 4.1 #3 Bounty program with a reward pool of millions of QKC will be launched in August In August, the #3 round of the QuarkChain bounty program will be held, in coordination with the #1 round of token auctions. Participants will have the opportunity to share a reward pool of millions of QKC. 4.2 The first phase of QuarkChain's DeFi campaign will go live in August. Users can experience the multi-native token issuance auction and transactions. In August, we will launch the first phase of the DeFi campaigns. All community members are welcome to participate and create their own multi-native tokens through bidding. Regardless of whether you win the bidding or not, all participants will receive rewards. 5) FYI Thanks for reading this report. QuarkChain always appreciates your support and company. Learn more about QuarkChain Website https://www.quarkchain.io Telegram https://t.me/quarkchainio Twitter https://twitter.com/Quark_Chain Medium https://medium.com/quarkchain-official Reddit https://www.reddit.com/r/quarkchainio/ Facebook https://www.facebook.com/quarkchainofficial/ Community https://community.quarkchain.io/
https://medium.com/quarkchain-official/quarkchain-monthly-project-progress-c24a1a1d43c0
[]
2020-09-30 23:22:58.724000+00:00
['Quarkchain Updates', 'Blockchain', 'Weekly', 'Quarkchain', 'Technology']
547
What Makes the Blockchain Special
What Makes the Blockchain Special Blockchain technology could become the new standard for storing and exchanging data in the online and real world. Bitcoin is only the crowning achievement of the digital revolution. Special Properties of Blockchain Immutable Once entered, no one can alter its records. Distributed Anyone can participate in adding and verifying its records. Decentralized No single party controls its input or output. Verifiable Anyone can directly verify its complete history. But how can the blockchain be used in the real world? Especially Special Use Cases Energy Industry In the energy industry, the blockchain-based company Essentia is developing a test project that will help energy suppliers track the distribution of their resources in real time while maintaining data confidentiality. Another example of blockchain proving its worth in the energy industry is Chile's National Energy Commission, which has started using blockchain technology to certify data on the country's energy usage as it seeks to update its electrical infrastructure. Real Estate of Block-tech In real estate, the blockchain is now being used to record ownership and complete property deals, the first of which was conducted in Kiev by the startup company Propy. Land registry titles are now being stored on the blockchain in Georgia in a project developed by the National Agency of Public Registry. Enterprise Ethereum's blockchain can be utilized as cloud-based storage, courtesy of Microsoft Azure. Google is building a blockchain which will be integrated into its cloud-based services, enabling businesses to store data on it and to request an individual white-label version developed by Alphabet Inc. IBM and Walmart have partnered in China to create a blockchain project that will monitor food safety. IBM is also using the Hyperledger Fabric blockchain in China to monitor carbon offset trading. Environment The protection of endangered species is being facilitated via a blockchain project that records the activities of the world's rarest creatures. Blockchain technology has also been used to provide a transparent record of caught fish, ensuring they are legally captured. Expression Blockchain technology has the potential to prevent censorship and increase transparency, as Civil is proving. By storing certificates of authenticity on the blockchain, it's possible to dramatically reduce art forgeries, as several blockchain projects are proving. Arbit is a blockchain-based project led by former Guns N Roses drummer Matt Sorum seeking a fairer way to reward musicians for their creative efforts.
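To make the "immutable" and "verifiable" properties listed above concrete, here is a minimal, self-contained Python sketch of a hash-chained ledger. It is a teaching toy, not how any production blockchain is implemented; real systems add consensus, signatures, peer-to-peer networking and much more.

```python
# Toy hash-chained ledger illustrating immutability and verifiability.
import hashlib
import json

def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, data):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"data": data, "prev_hash": prev})

def verify(chain):
    """Anyone can re-run the hashes and detect tampering anywhere in history."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(verify(chain))                     # True

chain[0]["data"] = "Alice pays Bob 500"  # attempt to rewrite history
print(verify(chain))                     # False: the later link no longer matches
```

Because every block commits to the hash of the one before it, changing any old record breaks every subsequent link, and anyone holding a copy of the chain can detect it.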
https://medium.com/swlh/how-blockchain-technology-is-changing-the-world-a6f373bcbb5c
['David Mcneal']
2020-12-06 12:40:07+00:00
['Distributed Ledgers', 'Blockchain Technology', 'Blockchain Startup', 'Blockchain', 'Future Technology']
548
Salesforce Work.com Users Email List | Work.com Database
Salesforce Work.com Users Email List — TechDataPark Work.com is a performance management platform that helps employees and managers perform better through coaching, anonymous real-time feedback, and recognition. It is a highly flexible tool and helps organizations increase the output of their workforce while improving efficiency. The Salesforce Work.com Users Email List from TechDataPark is a comprehensive mailing list of Work.com users. Moreover, this list includes some of the largest and most renowned companies in the world, and the Salesforce Work.com Users Mailing List offers a vast database of prospective leads for promoting other relevant products. In addition to encouraging better lead conversions, it gives advertisers insight into the popularity of the product, so vendors can better decide what to promote to which lead for better results. Considering current market conditions, the Salesforce Work.com Users email address list is a tool for growing your customer base. Why use the Salesforce Work.com Users Email Address List for marketing? When promoting a niche product or service, it always helps to take industry preferences and requirements into account. Moreover, email lists such as this one enable marketers to correspond directly with prospective leads. With its complete information about the leads, it allows advertisers to run a multi-channel marketing drive. ADVANTAGES OF AVAILING THE Salesforce Work.com Users Email List • Run a customer-centric campaign • Generate qualified leads • Reach out through multiple channels • Verified and validated database • Customized mailing lists • Assures high ROI • Guaranteed deliverability • Available at affordable prices View on: Salesforce Work.com Users Email List To generate better revenue and grow your business, we help you connect with the top technology user lists. To know more about the features, email us at: [email protected] Contact us at: +1(669)293–6007
https://medium.com/@joanpeter144/salesforce-work-com-users-email-list-work-com-database-7a2e58431385
['Joan Peter']
2020-02-20 09:32:49.508000+00:00
['Salesforce Work Com List', 'Salesforce Work Com', 'B2b Marketing', 'Email Marketing', 'Technology']
549
4 Important Reasons to Answer Others’ Programming Questions Online
3. A Good Reputation Can Help You With Marketing Of course, when you are a startup, you want to make money and promote your product. Helping others in areas in which you are very well versed and perhaps one of the best in the world will clearly push you forward. Becoming an expert on different platforms, helping others with your answers, and showing the depth of your knowledge can obviously improve your chances of getting new clients. Photo by Merakist on Unsplash. The CEOs of most tech startups are also people who are tech-savvy and often visit such platforms — or at least their head developers might share some solutions with them from such platforms. If they notice that you understand these issues much better than them, they can turn to your startup for help and become your client. As such, this path is also very important and can be useful.
https://medium.com/better-programming/4-important-reasons-to-answer-others-programming-questions-online-5751e84f60fa
['Elvina Sh']
2020-10-05 20:46:49.882000+00:00
['Programming', 'Startup', 'Learning', 'Technology', 'Education']
550
Technical Update 20
Fantom has continued to develop additional functionality and security for the network, for example by extending our work to make Lachesis ABCI-compatible. This will give Lachesis Tendermint-like functionality, with the ability to plug into different services, thereby offering Lachesis as Consensus-as-a-Service. Go-Lachesis Upgraded nodes in the public testnet to v0.6.0-rc.2. Expanded app.blockContext to call App.DeliverTxs() on each transaction separately: https://github.com/Fantom-foundation/go-lachesis/pull/456 Removed the dependency of evmcore.NewStateProcessor() on stateReader from gossip: https://github.com/Fantom-foundation/go-lachesis/pull/470 Completed the first stage of performance tweaking in preparation for launching a test environment to measure performance. The test environment will consist of the following modifications: R1: Users can submit transactions into any node's transaction pool. R2: A node can pick txns from its own pool and generate a new event block. R3: A node can send its own event block to any peer node of the network. R4: Removed the restrictions on event emission rate. R7: Disabled the fork mechanics. R9: Removed free gas parents (we won't need gas and fees in a testing environment, as we are measuring tx performance). All changes are merged to develop — https://github.com/Fantom-foundation/lachesis-ex/tree/develop. Implemented logic to check the node for the latest Lachesis version: https://github.com/Fantom-foundation/go-lachesis/pull/471 Improved test coverage to 82.5% for the go-lachesis/ethapi package: https://github.com/Fantom-foundation/go-lachesis/pull/455 Created a new private txpool: Added new functionality to the existing evmcore.TxPool: a mark indicating that a transaction is private. Marked transactions must not be sent to gossip.ProtocolManager. Duplicated all transaction-sending methods with variants that add this mark (ethapi and the console). Made sure that the emitter receives private transactions without a timeout. https://github.com/Fantom-foundation/go-lachesis/pull/472 Upgraded the public testnet (with 3 nodes) to v0.6.0-rc2 from v0.6.0-rc1, and upgraded the SFC as well New Fantom Wallet Our new official Fantom Wallet, which we're building as a Progressive Web Application (PWA), is nearing completion. It will have built-in Ledger support. The application will be downloadable as a desktop or mobile wallet, and will also be available as a web wallet. The repositories related to the PWA wallet are: Removed the dependency on the Web3js provider and the default connection to the Val1 full node, which should remove a bottleneck causing wallets to lag in times of high load: https://github.com/Fantom-foundation/fantom-metamask/commit/bc990d1ae9b143f710ec0c351f9b43ec09372e9 Created a new UI connected to Metamask and GraphQL, with commits such as the following: Modified the GraphQL API to be compatible with the new PWA wallet: Completed Ledger App development. Currently in the process of connecting it to the PWA wallet. Research
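As a conceptual illustration of the private-txpool change described above, here is a Python toy: transactions carry a private mark, and only unmarked ones are handed to the gossip layer while the local emitter still sees everything. This is an illustrative sketch of the idea only, not Fantom's actual Go implementation.

```python
# Conceptual sketch of a tx pool with a "private" mark, loosely mirroring
# the go-lachesis change described above. Illustrative only, not Fantom code.
from dataclasses import dataclass, field

@dataclass
class Tx:
    sender: str
    payload: str
    private: bool = False   # the new mark: private txs stay out of gossip

@dataclass
class TxPool:
    txs: list = field(default_factory=list)

    def add(self, tx: Tx):
        self.txs.append(tx)

    def for_gossip(self):
        """Only non-private transactions are broadcast to peers."""
        return [tx for tx in self.txs if not tx.private]

    def for_emitter(self):
        """The local emitter still sees everything, private or not."""
        return list(self.txs)

pool = TxPool()
pool.add(Tx("alice", "transfer 1 FTM"))
pool.add(Tx("bob", "private transfer", private=True))
print(len(pool.for_gossip()))   # 1 -- the private tx is not gossiped
print(len(pool.for_emitter()))  # 2 -- but the local emitter can still include it
```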
https://medium.com/fantomfoundation/technical-update-20-27778c32333e
['Michael Kong', 'Fantom Foundation']
2020-04-24 11:58:42.915000+00:00
['Blockchain', 'Blockchain Technology', 'Cryptocurrency', 'Consensus', 'Crypto']
551
Is Vyper a good alternative to Solidity?
Let's see whether Vyper is a viable alternative to Solidity and how much it's being used these days. What is Solidity? Solidity is an object-oriented programming language for writing smart contracts. It is used for implementing smart contracts on Ethereum and other EVM-compatible blockchains (such as Ethereum Classic and Quorum). If you already have some knowledge of JavaScript, Solidity's syntax will seem simpler than Vyper's (which resembles Python). Right now, Solidity's main advantage is its widespread use amongst Ethereum developers and the large amount of beginner resources: there are lots of tools available to make your life easier, like Truffle and OpenZeppelin. Also, Solidity courses are available online and some of them are even free. Have a look at CryptoZombies. What is Vyper? Vyper is a general-purpose, experimental programming language that compiles down to EVM (Ethereum Virtual Machine) bytecode, as does Solidity. However, Vyper is designed to massively simplify the process in order to create easier-to-understand smart contracts that are more transparent for all parties involved and have fewer points of entry for an attack. Vyper is a newer language, and its developers learned from Solidity's issues. They designed it to be simpler, more secure, and easier to audit. As such, Vyper lacks modifiers, class inheritance, recursive calls, and a few other features that proved to be problematic in Solidity. Also, it adds overflow checking and strong typing. Which is used more, Solidity or Vyper? To compare which of them is used more, I used the GitHub search function to find how many contracts are written in Vyper and Solidity. What I did was basically search for all files whose extension ends either in .vy or in .sol, since those are the respective file extensions of Vyper and Solidity. This search had a shocking result, as you can see for yourself: Vyper contracts (GitHub search screenshot). Solidity contracts (GitHub search screenshot). At the time of writing, only 160 contracts are written in Vyper against a whopping 311k contracts written in Solidity. The reason for the low number of contracts in Vyper is that it's still an experimental language; even though it's going to be used for ETH2.0, the community and resources aren't there yet for beginners, and the documentation is not as extensive as Solidity's. Is Vyper Turing complete like Solidity? First of all, let's look at what Turing complete means. A Turing-complete system is one in which a program can be written that will find an answer (although with no guarantees regarding runtime or memory). The Vyper language is NOT Turing complete; Solidity is. At the same time, a program written in Vyper will always have a predictable output. A program written in Solidity will not have a predictable output until and unless it is deployed and executed. Vyper is intentionally not Turing-complete for security reasons and adds overflow checking, which addresses one of the biggest flaws in Solidity. So why should I switch to Vyper if the majority of developers use Solidity? Even though most contracts are written in Solidity, as I've outlined before, there are a lot of bonus points for Vyper, and it's doing a great job improving the security of smart contracts. Striving for simplicity Vyper has eliminated class inheritance, function overloading, operator overloading, and recursion, as none of these are technically necessary to create a Turing-complete language.
Also eliminated as unnecessary are the less common constructs: modifiers, inline assembly and binary fixed point. Things to be aware of when coming from Solidity Since the documentation is not as extensive as Solidity's, which has been revised many times, I recommend using this section of the documentation if you don't find what you are searching for. These are some of the things that developers who want to learn Vyper and already know Solidity should be aware of: address(this) becomes self; address(0) becomes ZERO_ADDRESS; there are no reference variables in Vyper; require becomes assert; and Solidity's assert becomes assert ..., UNREACHABLE. Is Vyper going to replace Solidity? Vyper was NOT created to replace Solidity. It was created to be used alongside it, since it compiles to the same bytecode, in order to boost security. A recent study found that more than 3,000 contracts written in Solidity contain security flaws. Vyper was created to be as similar to Python as possible, but it is not yet a start-to-finish replacement for either Python or Solidity; rather, it is a language to use when the highest level of security is required, such as smart contracts holding patient health metadata. Thoughts? Questions? Please drop a comment below. If you've found this post helpful, please leave a Clap. Hope you are smarter and you learned new things! Thanks for reading 👐
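Since overflow checking keeps coming up as one of the big differences, here is a small language-agnostic illustration in Python of what it means for 256-bit unsigned integers. It mirrors the idea behind Vyper's built-in checks (and SafeMath-style patterns in older Solidity), not the actual implementation of either compiler.

```python
# Illustration of unchecked vs. checked 256-bit unsigned addition.
# This mirrors the *idea* behind Vyper's built-in overflow checks,
# not the actual compiler implementation.
UINT256_MAX = 2**256 - 1

def unchecked_add(a, b):
    """Wraps around silently, like raw unchecked EVM arithmetic."""
    return (a + b) % 2**256

def checked_add(a, b):
    """Reverts (raises) on overflow, like Vyper or SafeMath-style Solidity."""
    result = a + b
    if result > UINT256_MAX:
        raise OverflowError("uint256 addition overflow")
    return result

balance = UINT256_MAX
print(unchecked_add(balance, 1))      # 0 -- the balance silently wraps to zero
try:
    checked_add(balance, 1)
except OverflowError as error:
    print("rejected:", error)         # the checked version refuses to corrupt state
```

The silent wrap-around in the first case is exactly the class of bug behind several well-known token exploits, which is why a language that checks it by default is attractive for high-security contracts.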
https://medium.com/coinmonks/is-vyper-a-good-alternative-to-solidity-ac89c33a1d43
['Rachid Moulakhnif']
2020-08-26 11:43:32.634000+00:00
['Blockchain Technology', 'Solidity', 'Ethereum', 'Vyper', 'Blockchain']
552
How Should We Prepare to Refactor Our Code?
Photo by Mimi Thian on Unsplash Programs are always changing as new requirements come in. Unless we replace the code with something completely new, it is always going to be worked on. In this article, we'll look at how we can prepare to refactor our code so that we can rework it and make maintenance easier. When Should We Refactor? Refactoring is something we may have to do when we change code to meet new requirements. If things can't be merged together, or anything looks wrong, then we have to change them to make them right. Duplication is something we should eliminate from our code: if anything violates the DRY principle, clean it up. Nonorthogonal code should also be changed so that it becomes orthogonal — having code that doesn't break other parts when we change it is critical. Any outdated code should also be updated. If there's dead code, remove it. Code that meets outdated requirements has to be updated to meet the new ones. If the performance requirements aren't met anymore, then we have to move functionality from one area of the system to another to improve performance. Real-World Complications We may not have time to refactor now. But if we don't do it now, we're just compounding the problems in our code, and we'll run into bigger problems later. We have to explain this to stakeholders so that we're given the time to refactor our code. The longer we wait, the worse the problems that arise from not refactoring will get. Refactor Early, Refactor Often Therefore, we should refactor whenever we need to. Refactoring is all about redesigning our existing code. Anything the team designed can be redesigned in light of new facts, deeper understanding, changing requirements, and anything else that arises. It's something that needs to be undertaken slowly, deliberately, and carefully. We shouldn't try to refactor and add functionality at the same time, and good tests should be in place before refactoring so that we can make sure the refactoring didn't break anything. Some IDEs can do simple refactorings for us automatically, like cleaning up unused imports, so we may want to take advantage of that. Also, we should take small, deliberate steps when we're refactoring. For instance, we move one field from one class to another, then we test, and only then do we take the next step. Code That's Easy to Test Code that's easy to test will make our lives a lot easier. Code needs to be tested before it's released to the public, and we can make that easier by writing code that is easy to write unit tests for. A unit test establishes some kind of artificial environment in which to test our code. It checks that our code returns the results we expect when called with a given input. Then we can assemble all of those unit tests and run them as a suite. Photo by Stefan Steinbauer on Unsplash Testing Against Contract We have to write test cases to make sure that a given unit honors its contract. They tell us whether the code meets the contract and whether the contract means what we think it means. We want to verify that a module delivers the functionality it promises by checking it against a wide range of test cases and boundary conditions. Therefore, to test any function comprehensively, we have to test it against a few kinds of cases: we pass in invalid arguments and ensure they're rejected, we pass in boundary values and ensure the results are correct, and we pass in ordinary valid arguments and ensure the results are still correct, as the sketch below illustrates.
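As a concrete illustration of those three kinds of checks, here is a minimal sketch using Python's built-in unittest module. The function under test, average(), is a hypothetical example invented for this sketch, not something taken from the article.

import unittest

def average(values):
    # Contract: values must be a non-empty sequence of numbers.
    if not values:
        raise ValueError("values must not be empty")
    return sum(values) / len(values)

class AverageContractTest(unittest.TestCase):
    def test_invalid_input_is_rejected(self):
        # Invalid argument: the contract says this must be rejected.
        with self.assertRaises(ValueError):
            average([])

    def test_boundary_single_value(self):
        # Boundary case: the smallest valid input.
        self.assertEqual(average([7]), 7)

    def test_valid_input_returns_expected_result(self):
        # Ordinary valid input.
        self.assertAlmostEqual(average([1, 2, 3, 4]), 2.5)

if __name__ == "__main__":
    unittest.main()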
However, most modules depend on other modules, which means we have to do more work to write our tests: we test the submodules that the current module depends on first, and then test the behavior of the module itself. With comprehensive test coverage, we catch lots of issues that we would otherwise miss. Conclusion We should write testable code so that we can run tests and make sure we didn't break anything while refactoring. Refactoring is important because requirements change, and we have to change our code with them while making sure that everything still works afterwards.
https://medium.com/dev-genius/how-should-we-refactor-our-code-afca80b28714
['John Au-Yeung']
2020-06-19 15:49:20.902000+00:00
['Technology', 'Software Development', 'Productivity', 'Programming', 'Web Development']
553
Chris Neill moderates the Panel at BFC 2018 NA
Chris Neill moderates the Panel on the Role of DLT in Post Trade Processing at BFC 2018 NA Chris Neill, Head of American Development at XinFin, moderated the panel discussion on the role of Distributed Ledger Technology (DLT) in post trade processing at the Blockchain For Finance Conference, North America, recently held in Boston. Topics covered in the panel discussion: Inefficiencies in the current tech for post trade processing and the DLT promise Time: 4:30–6:59 minutes Duplication of steps is happening across the ecosystem, as banks, clients, infrastructure providers, CCPs and regulators all do the same things. One reason for the duplication is disagreement on the processing steps, while at the same time everyone tries to process those steps within their own operations and pushes regulators to the end of the chain from a reporting perspective. It's not that the technology is inefficient; rather, people haven't realized until now that reinventing the wheel internally and trying to reconcile every single step through the life of a trade benefits no one. Eliminating or reducing reconciliation, and standardizing to develop common workflows, processes and data sharing, is all possible with blockchain; by programming workflows higher up the chain, one can synchronize the actual timings, which is the most important thing for reconciliation. Distributed ledger technology makes that possible. Achieving T+2, T+1 or T+0 settlement times in post trade processing despite the inefficiencies involved Time: 7:00–13:09 minutes Staying away from the hype, breaking down every business problem one wants to solve, evaluating each one in isolation and then bringing all the problem statements together is the right way to do it. Integrating the DLT component on the network allows you to address more of these problems in the longer term. A market convention that everyone agrees to is an important part, so how we apply DLT in post trade in 2018–19 matters. Rather than taking the exact workflows of existing processes, mapping them onto a blockchain and waiting for some magic to happen — as most people do — firms should map their processes onto a technology that allows them to do things in a much more peer-to-peer fashion. The good news is the rapid recognition of DLT in capital markets by major players, who are confident not only of removing duplication but also of truly digitizing processes by fundamentally changing workflows and doing things differently. What is required is breaking down and understanding the workflows, questioning the very existence of those workflows, knowing what actually needs to be done, and then making them fundamentally more efficient by reducing middle and back office costs. Rather than automating exception workflows, the approach should be to eliminate exceptions in the first place. Re-architecting the whole platform and re-thinking the workflow will help architect digitization using DLT and blockchain. Is the technology ready? How is the lack of standards constraining progress, and why is there disagreement over standardization? Time: 13:10–26:39 minutes Immediate feedback on standards, their applications and open questions is important, because the standards are evolving and their applications on DLTs and enterprise platforms are being tested.
Knowing what the standard is attempting to do is very important. Take each processing step (say there are 150 of them…), translate them into 12–15 core state transitions, and then feed a set of inputs into those state transitions to run validation rules that guarantee a deterministic output. Putting a standard on a DLT platform, an enterprise platform and a market utility provider in post trade processing guarantees the standardization of workflow across different vendor platforms. The fragmentation of solutions, and the amount of money spent reconciling across front and back office solutions, is massive; standardization, by contrast, supports the commercial aspect within the value chain while preserving compatibility across those platforms. Amidst so many DLT providers and utility providers, what matters is how infrastructure providers pick these standards, what critical mass develops around them, and what we are getting from technology providers to facilitate that. The challenge is to figure out the right time to standardize, and the level at which it will happen is also crucial. A domain model and a commitment to standardization will help advance the entire industry and build the enterprise blockchain space. The goal should be to ensure that everything from processing and storage to execution is standardized, and that on top of that a DLT abstraction is built which makes it possible to share and create distributed state machines. The goal should not be to create things that everyone struggles to adopt, but something that lends itself to easy adoption and enables state machines to do tasks themselves through standardization and DLT abstractions. How you translate this down to the lower-level details of the DLT abstraction is also very important. The cryptographic hash allows you to standardize because it doesn't matter how you store the data: the same hash means you have the same object, which can be the same state, the same trade information, and so on.
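To make that content-addressing point concrete, here is a minimal sketch in Python: if two parties serialise the same trade state canonically, they derive the same hash, no matter how each of them stores the data internally. The field names are purely illustrative, not part of any actual post trade standard.

import hashlib
import json

def state_hash(trade_state: dict) -> str:
    # Canonical serialisation: sorted keys, no insignificant whitespace.
    canonical = json.dumps(trade_state, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

party_a_view = {"trade_id": "T-1001", "notional": 5000000, "status": "CONFIRMED"}
party_b_view = {"status": "CONFIRMED", "trade_id": "T-1001", "notional": 5000000}

# Same logical state, different storage order -> identical hash.
assert state_hash(party_a_view) == state_hash(party_b_view)
print(state_hash(party_a_view))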
While the tech is ready for smaller use cases, it is important to know for which use cases, at what scale, the tech is actually ready. Volumes are a critical component, and the value tends to be highly correlated with the actual scale of a process. From a performance perspective the tech is ready, but for larger use cases it is still evolving. The scale component has many pieces: all the different users and nodes that make up the system will demand business intelligence, operational requirements and compliance reports, so the system has to do all of this securely while supporting all the models other systems need to run in agreement. The industry is implementing enterprise-grade solutions that connect systems — risk, compliance, reporting, regular operational users — all while becoming part of market infrastructure. The challenge for enterprise blockchain as a whole is how to retain the benefits of a blockchain system, i.e. a shared truth across the market and industry, while securing privacy and scalability. Any approach taken for enterprise blockchain should therefore be one that can scale horizontally. The relationship of DLT and digital assets with enterprises and exchanges, and the adoption of DLT by enterprises and exchanges for post trade processing Time: 26:30–36:20 minutes There are certain use cases that fit the capability and risk tolerance of the tech, the regulators and the market. However, these platforms need to be tested for resiliency, privacy and security, and for how they solve the performance problem — what the architectural designs are, and what decisions and compromises were made. So, learning about use cases of higher scale should be the next step. It's important to know what the use cases are trying to solve: are they localized, are the efficiencies localized, is the network affected by those use cases — only then is there process optimization and efficiency gain. Not everybody is willing to go through the blockchain journey in post trade, so figuring out the transition stage where enterprise systems can co-exist with blockchain, DLT and smart contracts will result in the co-existence of legacy platforms with new platforms. So, no rip-and-replace play is going to happen for institutions participating in the ecosystem. Participants expect the DLT platforms to facilitate that transition, but each platform behaves in a different way, which makes it difficult for a commercial enterprise to pick one ring to rule them all. If interoperability is not there, then what are the other options for DLT platforms, and how do they all work together? That is a big technological challenge standing before broad-based adoption of these platforms. Institutions should get efficiency gains from DLT; the multiple internal systems, multiple legal entities and multiple reporting entities — entities that simply cannot hold the same data because of privacy rules such as the EU's — should work together. Enterprises have built this network or abstraction across their internal systems, which the industry regards as not really a DLT because the entire network is not brought together; but the reality is that digital transformation has already started with a DLT data model, and an effort is there to connect with other network participants. Interoperability is very important: learning the tech, ensuring all applications really fit the technology, standardizing tools, and applying them to the use cases that are right. Extracting data from systems, transforming it and loading it onto other systems should be done using standard models. Rather than inventing new languages, the effort should be to make things straightforward and simple so that code can be generated automatically for people. Higher-level DLT abstraction should come before lower-level details, as the latter can be provided as a library on which upper-level applications are built, promoting interoperability and faster adoption of DLT. DLT adoption has already started among exchanges. Market infrastructure will play a key role in setting up the entire blockchain space. All are coming together for cost reduction — i.e. revenues — which is where the most excitement lies. A move has already started towards tokenizing the assets traded on some exchanges and towards balance sheet optimization. Platforms are being tested, banks are coming in to share the cost so that costs are mutualized, and market infrastructure and exchanges are anticipating high gains, so adoption of DLT is accelerating. DLT and its effect on the actual end user Time: 36:21–47:06 minutes The real benefit for end users should be lower cost, whereas the fact is that end consumers pay a high cost for the kind of complex infrastructure we have. The big users of commodities in the futures market approach SMEs wanting a hedge and purchase futures against commodity pricing, largely ignorant of what the commodity will do next year, and lock in prices.
Given the inefficiencies in post trade processing, companies are paying a higher cost for futures trades, and ultimately so are the customers. Costs are equally borne by big financial companies and banks, and they eventually come down to us as fees and costly products. DLT can inject efficiency into all of these systems, which can translate into a more efficient use of capital in the broader economy — so end users will benefit. The approach should be: how do I maximize my investment in technology and operations so that the business can be run efficiently and capital can be deployed on revenue-generating products? Looking at the entire infrastructure, businesses see a benefit in fixing their own shop first by adopting a DLT platform. But what happens next? That depends on some critical mass, on development in the industry, and on coming together to build one use case that can be solved across a core set of like-minded participants, with technology partners and infrastructure providers demonstrating material benefits. No one wants to maintain legacy platforms as well as new technology in one place, as that is very expensive. Once we solve the performance and throughput kinds of questions, it becomes more tangible for businesses to learn how to create and deploy within an internal stack, how to run the legacy and new technology in parallel, and how much it will cost. The co-existence of distributed/decentralized technology and market utilities poses some questions. Censorship resistance and dealing with trust among unknown parties is not actually a requirement here; the challenge is to find the most efficient way to solve these problems. A utility brings efficiency through some elements of centralization: for processing certain components that are not required to be distributed, for governing a network whose participants are equally represented, and for dealing with participants that may never see a cost benefit in adopting a node and being an intimately integrated part of the network. Disintermediating everything for its own sake should be replaced with effective and efficient solutions to the problem. The ability to move peer-to-peer can be taken to more and more levels — the stack, governance and beyond. Peer-to-peer technology is being architected correctly, and now there is a need to think of new ways to go to market and access capital. ICOs are a perfect example of the beginnings of a more direct capital market. The buy side and sell side seem to be fading, and market participants, banks and SMEs are all trying to figure out how they can fit together. So, more like-minded peer-to-peer transactions with each other, on a cross-border basis, are required. There is a vast class of applications today that need some type of intermediary, but sometimes the processes that involve the intermediary aren't that efficient. Making components P2P can add value; however, utilities also play a key role. The applications built today should bring all the P2P components together with the right enterprise, the right execution model and the right components. If DLT makes sense, use it; otherwise look for other alternatives. When there are multiple participants and multiple systems that can connect, there is a good case for taking some of the common ideas, building some standardization across them, and then using the execution model that comes out of the original P2P learning and the DLT model. Is there going to be a time for fully decentralized operations? Do we see this adoption happening, i.e.
a decentralized transition strategy? Time: 47:07–52:20 minutes No technology can simply arrive and declare that it is going to disintermediate the present systems for a particular utility, because some sort of third party is required to step in by regulatory mandate. Those utilities provide risk management features that are a regulatory mandate, so they will remain until there is a new regulatory mandate and process optimization. The processes are commercially non-differentiated and there is no secret sauce here. In this process of defining standards and finding use cases where businesses can work towards efficiencies by changing some of the structural elements of the markets, no one will oppose that change. The change has to come somewhat organically across the industry participants. It's not really for the regulators to mandate this; it is an industry ecosystem, and participants have to figure it out for themselves. So, the question is not P2P eliminating central authorities, but what the applications are. No currency can be standardized across banks, and in terms of tokenization, a cryptocurrency is involved that will be issued by a group of people and then traded amongst the network participants. So, full decentralization will take time, and we don't see the elimination of banks or anything similar happening in the near future. The disintermediation ideas are not bad, but businesses don't see anything radical happening in 2–3 years; it might happen down the line, in 5 years or more. What businesses can see is market participants actually adopting, and trying to become very friendly to, this new idea of digitization and new way of thinking (DLT), which makes their operations efficient and eventually translates into lower costs for everyone. XinFin openly invites FinTechs, SMEs, banks, financial institutions, and fund & asset managers to come and join us. XinFin already has a Sandbox environment and is licensed to do live transactions. XinFin has live APIs and would like to invite you to build standardization around them and gradually roll them out into live production. XinFin has approached several regulators around the world, which establishes credibility in many different jurisdictions. If you are looking for consortium membership or standardization, or want to build around regulators, XinFin can provide a platform for you. Visit: www.tradefinex.org Visit: www.xinfin.org or email us at [email protected]
https://medium.com/xinfin/chris-neill-moderates-the-panel-at-bfc-2018-na-b7feeb0925ab
['Xinfin Xdc Hybrid Blockchain Network']
2018-12-21 13:17:38.859000+00:00
['Latest News', 'Distributed Ledgers', 'Blockchain', 'Blockchain Technology', 'Technology']
554
Clean code — Name matters. A good naming convention is one of the…
1. Give your names intentions The name of a class, function, or variable should tell you why it was created, what it does, and how it is used. If you need to add a comment above your variable, you probably didn't make your intentions precise enough. Choosing the right name is a good investment, so take your time. Asking your colleagues for help is a good idea — remember, you are not the only one who will be working with this code.

Bad:

public int $f_cnt; // friends count

Good:

public int $friendsCount;

The variable name $f_cnt doesn't tell us clearly what kind of information it holds. We have to guess, or dig deeper into the code, to find the information we are looking for.

Bad:

public function process(array $u, $a = 18): array
{
    $results = [];
    foreach ($u as $i) {
        if ($i['a'] >= $a) {
            $results[] = $i['id'];
        }
    }
    return $results;
}

As you can see, this code isn't complicated. It has only one array iteration and one conditional statement, but can you deduce what it does? The problem here is not the code's complexity but its secretiveness: the names of the variables, array keys, and function parameters are hiding important information from us. Let's look at the refactored code.

Good:

public function getAdultUsersIds(array $usersList, $adultAge = 18): array
{
    $adultUsersIds = [];
    foreach ($usersList as $user) {
        if ($user['age'] >= $adultAge) {
            $adultUsersIds[] = $user['id'];
        }
    }
    return $adultUsersIds;
}

Immediately better. We can now deduce what the method does just by reading its name.
https://medium.com/marcin-worwa/clean-code-name-matters-6368f8e385bd
['Marcin Worwa']
2020-12-09 19:39:27.149000+00:00
['Clean Code', 'Software Development', 'Technology', 'PHP', 'Programming']
555
A Proposal to Establish the U.S. Digital Progress Administration
President-elect Joe Biden should create the digital equivalent of FDR’s Works Progress Administration to build durable continuously improving public digital assets, restore faith in government, and most importantly, to protect our democracy from the steady encroachment of fascist authoritarianism. Photo by Markus Spiske from Pexels On his first day in office, President Joseph R. Biden, Jr. should create the U.S. Digital Progress Administration (DPA) and elevate it to a crisis intervention role similar to FDR’s Works Project Administration (WPA). The overall goal of this new DPA: build durable public digital assets that more affordably meet public needs and continuously improve over time. Acting on this recommendation would not only help our citizens believe in our government again, it would also put a halt to the waste of billions of dollars in public funds that are paid annually to a handful of Big Tech firms whose lucrative but unethical business practices would have made many 19th century robber barons blush. We can make far better use of all that wasted cash. Public uses, similar to how FDR’s WPA built much of the pre-computer age physical infrastructure Americans still enjoy today. Public domain image There is no other strategy that would do more to stem the rising authoritarian impulses that have shocked our system in recent years. By making dramatic tech-based improvements to government services that are now easily within reach, by using taxpayer funds more wisely, we can do a lot to rebuild confidence in our government. That is important. It is the current widespread absence of such confidence, a shared faith that our government is working well for all of us, that forms the fertile ground in which alarming fascist and authoritarian impulses have already taken root. The urgency of this recommendation is based on several factors not the least of which is this: the next would-be tyrant to rise up in our dystopian national reality is likely to be much smarter than the moronic Mr. Trump. And that person, we now know, could rather easily bring our entire still-young American experiment in self-government to its end. A government that consistently fails to deliver the goods its citizens pay for will not maintain its citizens’ support forever. That clock is already ticking. Photo by George Becker from Pexels I make this recommendation as a former federal official, a presidential appointee in the Obama administration and, more importantly, as a veteran Silicon Valley journalist. Because it was during my journalism days when I learned something very important that our government policy makers must now urgently comprehend. I shall explain. Image by Pixabay During decades covering Silicon Valley (for CNBC, Forbes, Barrons, the San Francisco Chronicle, helping create public radio’s Marketplace program, etc.) I was often called upon to write stories about tech firms, tech stocks, and start ups. Not surprisingly, my editors and readers always wanted to know which tech start ups would succeed, which tech stocks were good investments, and what strategies separated high tech’s business winners from losers. The advent of the Internet in the late 1990’s, which I covered, had given birth to an entirely different economy festooned with new firms, new technologies, and new business models. A lot of it seemed mysterious at the time, with wave after wave of new technologies crashing into markets looking less like products and more like magic. 
We had the initial dot com boom, followed by the dot com crash, and then the subsequent slow but steady growth of the oligopoly of Big Tech firms that now increasingly dominate our economy and, more recently, influence our politics. Based in Palo Alto, I wrote hundreds of articles during this period, learning along the way with my readers as new business realities emerged. No one was an expert at first because nearly everything happening was so new. We were all learning in real time. But there was one article I wrote early on, for CNBC.com, that became a sort of Rosetta Stone for me thanks to Laird Foshay, an early successful Internet entrepreneur. While researching the story, I had gathered Foshay’s insights into what investors should look for and why the venture capital world was so fired up at the time. Foshay, now a gentleman rancher, focused my attention on how the creation and sale of digital goods would have a world-changing impact on virtually every market over time. The central thesis of that 1999 CNBC article still rings true, even louder, today, not only for business but also for government at all levels, with financial implications that have compounded over time. Over two decades, hundreds of billions, perhaps a trillion or more, dollars have been spent, and in many cases, wasted as funds taxpayers had previously invested in durable goods that lasted for generations got diverted toward purchases of non-durable digital wares that primarily benefit their sellers. These newer digital age government purchasing habits and practices, which see taxpayers increasingly renting software to meet public needs instead of acquiring it, are slowly but steadily impoverishing our increasingly decimated public sector. Renters don’t build equity. American taxpayers have shelled out billions renting software-based technologies while building zero digital equity. What other tenant would make such a deal? In the process, our public sector is also denied the primary benefit, better government services at far lower costs, that new digital technologies would otherwise enable. Instead, when it comes to the delivery of government services the financial savings created by digital modalities are being extracted from the public sector to meet the voraciously high profit margin requirements of a handful of Big Tech firms. As an added benefit, the executive compensation and labor management practices of those firms often make significant contributions to income inequality, as well. The solution is for taxpayers to demand better value for their substantial investments in digital technologies. A properly led and focused Digital Progress Administration would make that possible. Here is the nut graph of my 1999 CNBC.com interview with Foshay about tech stocks, that puts a spotlight on the digital progress our government is leaving on the table: “A factor to consider when evaluating an Internet company’s long-term prospects is whether it sells digital or non-digital goods,” Foshay says. “Digital content, which can be manufactured and distributed far less expensively, offers the best chance of high returns. That’s been the secret to Microsoft’s success: huge gross margins. In the future, the [tech] companies that will be most successful are the ones that can use the Internet to not only sell, but also distribute, their products.” So, what does this mean for government? It helps first to understand what it has meant for business. 
As Foshay accurately observed at the time, the ability to make and distribute digital goods is central to the success of the most profitable high tech firms. In the pre-digital world, the world in which I was born, companies would make physical products one by one, store, ship, sell, and service them, again, all one at a time. If you made tires, for example, you made one tire for every tire you sold. Profit margins were often rather thin, in part, because expenses were so high. Every item sold was also an expense, you needed raw materials, factories, workers, etc. When a customer bought a tire, they got to own the tire, which they could keep or sell if they didn’t need it anymore. Photo by Kateryna Babaieva from Pexels By contrast, in the new digital world Foshay described, a tech firm could make a product once and sell it millions of times with no additional manufacturing costs and, thanks to the Internet, hardly any added distribution costs either. Imagine how profitable a tire company would be if it only had to make a single tire, which it could then sell to everyone who had a vehicle. The efficiency of creating and selling digital goods is the reason leading tech firms have been so wildly successful and why, for those who could afford their stock, they have been such outstanding investments. When you can make a product one time, and sell it millions of times at little or no additional cost, you have a license to print money. This also explains why successful tech firms often spend so much money playing around in other lines of business, developing new “skunk works” pie-in-the-sky projects (self-driving cars, eyeglasses with built in sensors, virtual reality devices, special purpose satellites, you name it). When your core business throws off more money than you know what to do with you can spend it on any idea that might capture your imagination, however fanciful. Good for them, you might say, American capitalism at work. And you’d be right, but only up to a point. Now, think about how our government bought necessary goods before the digital and internet revolutions. If the government needed typewriters, for example, it bought typewriters, which it owned, which it could use until they wore out, or sell to surplus stores when the time came to upgrade to a new machine. Anyone with know how could fix those machines. When it came to buying products that government needs taxpayers money was used mostly to buy durable goods that provided lasting items with tangible value. In many instances, these brick and mortar products allowed government to create vital public services and resources that were noticed and appreciated: public sanitation systems made out of cement and pipes that eliminated excrement in the streets and reduced disease, public roads that facilitated travel and industry, dams that tamed wild rivers and provided affordable electrical power, parks everyone could enjoy, trails, and in the 1950’s, the interstate highway system. Real physical stuff that made everyone’s lives better. Public confidence in government was higher in those days, in part, because our government was steadily improving the quality of our lives in ways that were hard to ignore. In that era, very few voters had reason to embrace the idea that our government should be strangled in the bathtub. Flash forward to our digital age. According to its own official figures, our federal government now spends about $100 billion a year on information technology products and services. 
The real truth however, is that our federal government presently has no idea, none whatsoever beyond that estimate, how much it really spends on IT. This blind spot is not the result of a lack of effort. Early on during President Obama’s first term I was part of a federal interagency task force organized by the White House Office of Science and Technology Policy that sought to measure actual total annual federal IT spend by agency and category of expense. The leaders of our task force had hoped to identify trends and possible opportunities for efficiencies. We made some limited progress measuring those costs but gave up after a few months after it became apparent we were on a fool’s mission. It turned out that calculating actual total annual federal IT spending was, we discovered, virtually impossible. In many cases, IT costs are bundled into the costs of other items, deeply buried in seemingly unrelated budget line items, or camouflaged in other ways. Only the most obvious expenditures are measurable. As one participant on our task force put it before we gave up, what we wanted to know was “pretty much unknowable.” Our task force quietly dissolved. No results were ever published. I imagine that must have delighted Big Tech. Concealing that data harms taxpayers but certainly serves the interests of IT vendors. The result is taxpayers spend billions each year renting the same digital items over and over again, word processing software, database software, communications, software, cloud storage, and end up with absolutely nothing to show for it in the end. Public investments in durable goods that generate lasting value have been replaced by public investments in non-durable goods that vanish after the last government check clears. Just recently, the federal government announced a new pentagon cloud storage contract totaling about $10 billion dollars. At the end of that contract, our government will have absolutely nothing to show for it, except another bill. If President Eisenhower had done the same thing when he built our interstate highway system, taxpayers would still be paying annual license fees to the contractors who built Route 66, which would not connect (or should we say “interoperate”) with any other roads. Here is something we do know: U.S. government agencies, federal, state, and local, are now the single largest and most lucrative customer for many leading Big Tech firms. Take database vendor Oracle Corporation, for example. According to one recent estimate, Oracle generates roughly 5 percent of sales, or approximately $2 billion per year, from the U.S. federal government. Sales to governments of all types generate as much as 25 percent of Oracle’s total annual revenues each year, according to another estimate. Making sure taxpayers keep paying thru the nose is a key reason many high tech firms enjoy such healthy profit margins. It’s also a measure of something else: how much money taxpayers have lost over the years renting software they could own, share with other government agencies, and instead pay to continuously improve. The taxpayer’s loss is Big Tech’s gain. Job One of a Digital Progress Administration should be to publish an accurate inventory, a federal market survey, of digital technologies that taxpayers could own and continuously improve rather than rent. 
And then one by one, based on practicality and impact considerations, the federal government should employ new purchasing methods that invite entrepreneurs to bid on contracts that provide those technologies in forms and formats that enable public ownership and continuous improvement. In short, we should build a digital public sector. We should build digital equity for the public sector. Before I describe some of the things a more viable adequately supported digital public sector would make possible let me emphasize one item that is not part of this proposal. I am not proposing the federal government take over the tech industry. Far from it. I think an even more viable, competitive, open, and progress-driven commercial tech industry will evolve in tandem with our federal government becoming a more discerning and sophisticated consumer. Instead, I am talking about building what is essentially a public option for many technologies that are now only available from proprietary sources. In cases where that public technology option is better, it will win public use and displace more extractive business models. In other areas, perhaps including newer uses of technology, more nimble private sector actors may be more likely to prevail. Public roads and private jets both exist at the same time. Technology that is owned by the public and technology that is owned and sold by the private sector can likewise coexist. A commercial tension between the two will be healthy for our economy, lend itself to the delivery of more efficient public services, and also be good for smaller, start up businesses. Image by Pixabay Take, for example, the present controversy over Apple Computer’s app store, which charges application developers an extortionary 30 percent commission on sales, which can only be processed through Apple’s system. Most app firms, which is to say virtually every form of business imaginable, would go out of business if they don’t cough up the cash to Apple. Imagine if that was how business worked in the analog world. Imagine if just one (or two, if you count Google’s android app store) for-profit firms could demand a 30 percent commission on every sale before anything could be sold in any physical store. One chokepoint on all forms of commerce. We’d have a revolution on our hands. And yet, because we are talking about software and technology (its like magic! We don’t understand how it works!) Apple gets away with it. Some hope that enforcement of antitrust rules will solve this problem, and those efforts might help. But it also could become a case of whack-a-mole, where one unfair business practice is replaced by another as smartly run tech firms stay one step ahead of the regulators in much the same way many of those same firms have avoided taxes they would otherwise owe. The solution is to build a practical public alternative. A Public App Store A Digital Progress Administration could oversee the creation of a public app store where app developers could offer their wares to the public over all smart phones for a reasonable annual fee, and where apps would be free to process payments from any vendor they select. This would create the digital equivalent of a public square where everyone can bring their goods to market free from thuggish extortionary demands. In this environment, our markets would move toward venues where freedom was maximized and we all know that freedom, with proper regulations, is good for markets. 
Here are a few thumbnail descriptions of just some of the other progress a Digital Progress Administration could bring within reach: Shared continuously improving software to more affordably manage public contacts and interactions with government agencies. Why is it so hard, in 2020, to get information about social security issues, military service records, available federal benefits, IRS rules, etc.? Why can’t the state of California, for example, deliver unemployment benefits in a timely way to millions of qualified desperate individuals? Why is it taking California’s DMV months to process a simple change of ownership? Proprietary software vendors, who always seek to “lock in” government contracts in ways that lock out competitors, have no reason to make things simpler, less expensive, or more transparent. Unless we demand those features. Shared continuously improving software that manages parking, traffic patterns and enforcement, and transportation services including so we can more reliably determine when the bus or train will arrive. Ditto with shared public software that can track and monitor climate change inputs and outputs. Shared publicly-owned software that finally enables a real start on the dream of Smart Cities, which has foundered here in the U.S. but which is happening more quickly in many other countries, including in China, primarily because U.S. tech vendors insist on owning Smart City software solutions and charging royalties in perpetuity, rather than selling software solutions that other vendors can service and improve. Software that citizens can use to register their employment status, availability for work, and the amounts they spend on items such as food and housing, to replace the inaccurate statistics on which so many wrongheaded government policies are based. Software that provides candidates for public office a reliable way to present their platforms and pitches to voters without having to pay huge sums to intermediary for-profit media companies. A public social network that enforces basic standards of accuracy, decency, and fairness as an alternative to social networks driven entirely by profit motives (think the social networking equivalent of how public broadcasting lives side-by-side with commercial broadcasting). Something I have been calling for for years: more public investments in the creation and continuous improvement of open educational resources as substitutes for proprietary K-12 and college textbooks, which unnecessarily consume billions of dollars a year in public resources and student financial aid. The government can, today, right now, make free online textbooks available to students at a tiny fraction of their present cost, much of which the public already shoulders. Open educational resource textbook passages can be printed out as needed for just the cost of paper and ink. If I can come up with these ideas in five minutes off the top of my head, imagine the other opportunities a well-run Digital Progress Administration would bring into focus. The key is to focus this new DPA on a very simple question: how can our government use digital technologies to improve the cost-efficient delivery of the goods and services taxpayers need? Over the past few decades government tech policy at the state and federal levels has primarily focused on helping tech companies succeed. Those policies have worked too well. 
(I explain how the public interest was sacrificed in service of narrow private special interests at the dawn of the digital age in California here). The time has now come for our policy makers to design and develop tech policies that don’t make the rich richer but instead focus on how our government can use digital technologies to help all our citizens, our government agencies at all levels, and our democracy itself, succeed. That is why President-elect Biden should create the Digital Progress Administration. I hope and pray it is not already too late.
https://medium.com/digital-diplomacy/a-proposal-to-establish-the-u-s-digital-progress-administration-72b4be0b1230
['Hal Plotkin']
2020-12-02 02:46:35.324000+00:00
['Economics', 'Public Policy', 'Technology', 'Government', 'Biden']
556
4 Solar Technologies That Live In Your Solar Farm
By Richard Payne, Managing Director, ReNew Petra Capturing energy from the sun is not a new concept. In fact, solar energy has been harnessed since the 7th century B.C. Today, solar energy is the most popular and affordable clean energy option. To create energy, photovoltaic panels absorb energy from the sun and pass it to an inverter, and the resulting electricity is used to power our homes and businesses. Across the United States, we've seen an increase in utility-scale projects, commercial and industrial installations, and residential solar due to falling installation costs. But have you ever wondered what technologies support a solar installation? Here are four technologies that allow us to turn the sun's rays into electricity. Photovoltaic Cells Within each solar module, also called a solar panel, there is a string of photovoltaic (PV) cells. PV cells, also known as solar cells, convert the sun's rays into energy. Typically, a solar panel can capture about 30 percent of the sun's radiation; 70 percent of the sun's radiation comes from infrared light, 25 percent from UV light, and the remaining five percent from visible light. Each individual cell can produce three or four watts of power. Location, tilt, orientation, and several other factors determine how much sunlight the solar cells can absorb. There are two types of PV cells: Monocrystalline: Monocrystalline PV cells are made by slowly pulling a single silicon seed crystal out of molten silicon. Since they are made from one large crystal rather than many tiny ones, they are more efficient, outperforming polycrystalline PV panels by about 10 percent. Polycrystalline: The second type of PV cell is polycrystalline. These are the more common option and are less expensive than monocrystalline panels. Due to the manufacturing process, polycrystalline panels are typically dark blue with light blue patches. Inverters A vital part of every solar farm is the inverters. Inverters supply the generated power to the grid by converting the direct current (DC) coming from the solar panels into alternating current (AC). Types of inverters include: Central Inverter: A central inverter is prewired electronic equipment that can be custom made for the demands of large-scale utility solar farms, commercial rooftop solar installations, and grid-tied solar farms. A central inverter box has everything a solar site needs for distribution, automation, security, and monitoring. Since central inverters can withstand almost any climate, customers benefit from low installation costs and short project cycle times. String Inverter: String inverters are essentially smaller versions of central inverters. They are extremely efficient, much easier to install, and have a lower installation cost per peak watt than the other types of inverters. In the past, string inverters have cost more than central inverters; however, costs continue to fall as more large solar farms adopt string inverters for better overall solar farm uptime. Micro Inverter: Small but mighty, micro inverters work panel by panel and are user friendly for customers. Micro inverters can be found on residential rooftop installations and operate at a 190 to 220-watt power point. Despite their size, they have created more opportunity to harvest energy at a smaller scale.
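To tie these figures together, here is a rough back-of-the-envelope sketch in Python. The per-cell wattage reflects the "three or four watts" figure above; the cell count, string size and inverter efficiency are assumed example values, not data from the article.

CELLS_PER_PANEL = 60        # common residential panel format (assumption)
WATTS_PER_CELL = 3.5        # roughly "three or four watts" per cell
PANELS_PER_STRING = 20      # assumed string size
INVERTER_EFFICIENCY = 0.97  # typical modern string inverter (assumption)

dc_watts = CELLS_PER_PANEL * WATTS_PER_CELL * PANELS_PER_STRING
ac_watts = dc_watts * INVERTER_EFFICIENCY
print(f"DC output: {dc_watts:.0f} W, usable AC output: {ac_watts:.0f} W")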
Combiner Boxes Combiner boxes are installed between the PV panels and the inverters and offer protection for the systems. The combiner box brings several output solar wires together and has added features integrated into the box such as rapid shutdown, sensors, and monitoring equipment. Combiner boxes cut labor and material costs since they consolidate the wire string output. They offer a reliable, safe solution for maximizing the voltage coming out of the system. Racking A racking system is used to safely mount solar panels on different surfaces, such as roofs or the ground. Racking can be seen on top of warehouses, on building facades, in fields, and in other locations. There are various types of racking available depending on the application. For instance, ballasted racking systems are the best fit for slightly uneven surfaces or surfaces that can't be penetrated, like a roof. Another option is the ground mount, which requires additional engineering for varied topography. Ground mounts are used in larger utility solar farms, typically between five and 500 acres.
https://medium.com/@thinkrichardpayne/4-solar-technologies-that-live-in-your-solar-farm-db376ac8b7fb
['Richard Payne']
2019-06-30 13:07:16.649000+00:00
['Solar Energy', 'Renewable Energy', 'Solar Power', 'Technology']
557
Access the world of fine art with Maecenas
Accessing the world of fine art is a dream for many — little wonder when you consider some of the sales made this year alone. Many paintings are now being sold at auction for hundreds of millions. One of the latest is Modigliani's Nu Couché (Sur le Côté Gauche), a 1917 oil painting sold at Sotheby's New York auction house in mid-May for an incredible $157.2 million USD. Over at Christie's and Phillips in Hong Kong at the end of May, works by the Chinese-French Modern master Zao Wou-Ki dominated the auction scene. At a single evening sale at Christie's, Zao's work 14.12.59 made HK$176.7m ($22.6m USD), representing the third highest price for the artist at auction. For the millions who want to join the world of fine art investment, taking part in these auctions is an unrealistic ambition. However, this may no longer have to be the case — all thanks to blockchain technology. Blockchain makes it possible to participate in fine art ownership Fine art ownership isn't only a world for those who can splash the cash. Many also explore art investment as a way to preserve their wealth and grow it in an alternative market that is less volatile than many others. There are schools of thought within the industry, though, that nine-figure sales are becoming ever more common. While this may be music to the ears of some investors, and of the auction houses that make large commissions on the items sold, it also underlines how closed the art investment market is to so many people worldwide. Very few individuals can compete with investors and art funds that have hundreds of millions — even billions — at their disposal to invest in the world's most famous pieces. Maecenas' ultimate goal is to disrupt that market. Enter the incredible world of art with Maecenas Disruption can mean a lot of things. From our perspective, we're building Maecenas to diversify the market and open up a huge new array of options for individuals to access art, presenting them with ways to enter this world outside the traditional avenues. For centuries the art market has stayed relatively unchanged and unchallenged. The Maecenas platform has been built to decentralise and disrupt this, using blockchain technology to offer an alternative to the auction houses and to allow investors across the planet to buy shares in some of the world's most well-known and famous works of art. Maecenas is designed to create an art market without intermediaries, to make information more accessible to potential investors, to open new avenues of capital generation for galleries and collectors, to reduce overall costs, to increase liquidity and — essentially — to open the global art market up to everybody. Find out more about how the Maecenas platform is set to decentralise and democratise the art world with blockchain technology by reading our briefing documents.
https://medium.com/maecenas/access-the-world-of-fine-art-with-maecenas-f99313ba2148
[]
2018-08-01 02:51:23.710000+00:00
['Blockchain', 'Cryptocurrency', 'Investment', 'Technology', 'Art']
558
Fidelity driving into Crypto Lane — Top 5 Implications — Cautiously Bullish
This week has started on a high for the cryptocurrency industry, as Fidelity is all set to launch a crypto trading platform. It is perhaps the biggest news in recent times and can surely take the whole crypto industry to greater heights. The business will officially be called Fidelity Digital Assets, and it will provide an enterprise-grade custody solution, a cryptocurrency trading platform and consulting services 24 hours a day, all through the week — specially designed to align with blockchain's always-on trading cycle. Let's consider the broader implications of this mega news for the crypto industry itself: For those who are not aware, Fidelity Investments is the fifth largest asset manager in the world, providing financial services on some $7.2 trillion across the globe, including customer assets, and offering clearing, custody and investment services for more than 13,000 institutional advisory firms and brokers. Now it is finally entering the cryptocurrency market, which means there is demand for cryptocurrency assets. The giant didn't just wake up overnight: they planned, and planned, and planned, and then carefully crafted and launched. It will present cryptocurrency as a serious investment option for investors such as family offices and hedge funds, who have been shying away so far largely due to the custodian issues surrounding digital assets. But now, with a giant asset management monster protecting your hard-earned digital assets, you can sleep safely (no pun intended). This paves the way for 2019 to be the year Wall Street drives its Lambos into the crypto lane. This, coupled with Goldman Sachs' $16 million investment in BitGo, is a sign of better things to come. Not only will the market witness huge liquidity, we should also see Bitcoin touch the $15k mark by June next year. Other sleeping, dreaming or partially daydreaming giants will wake up to this; their boards will not let them sleep anymore. They have to do SOMETHING. They need to make some noise. It's all bullish. However, the continuing issues around Tether, wash trades, whale manipulation, BitMEX over-leverage and so on keep dampening spirits, so trade with caution. Also, for the first time in the history of Bitcoin — 2008 onwards — a traditional stock and bond crisis is brewing. As of now, I am undecided as to how this will affect the crypto markets. Early signs are that there is pressure, but the market is so tiny and nascent that there shouldn't be much spillover. I have yet to decide, and hence I remain cautiously bullish.
https://medium.com/sankalp-shangari/fidelity-driving-into-crypto-lane-top-5-implications-cautiously-bullish-9d9c596f43f0
['Sankalp Shangari']
2018-10-24 09:48:37.716000+00:00
['News', 'Blockchain', 'Bitcoin', 'Technology']
559
Congrats Jax! AirTree’s Newest Principal
One of the best things about being a VC is seeing the founders you’ve backed start to hit their straps and really show the world what they’ve got. But there’s one thing that even tops that — when you see your own team members do the same. That’s why we’re absolutely thrilled to announce that one of our very own superstars — Jax Vullinghs — has been promoted to Principal at AirTree. This sort of promotion is a very big deal for us and for Jax — and so I wanted to share a little about what it means, and why we’re so lucky to be working with her. Jax first joined our little team back in 2018 and she had an immediate impact. Her intelligence, growth mindset and passion were all apparent from day one, and in a short period of time she’s helped us lead new and follow-on investments into companies like Canva, Spaceship, MetaOptima and A Cloud Guru (plus another couple that haven’t been announced quite yet). Jax has shown herself to be an extremely thesis-driven investor. As anyone who’s read her substack knows, she thinks deeply about tech, markets and platform shifts. She’s a voracious reader of anything she can get her hands on — from psychology to economic history to academic journals to poetry. The breadth of her interests is staggering, and it means she can connect dots across markets and industries in ways that others can miss. Jax has already done a huge amount to further our thinking in the D2C, fintech and healthcare spaces in particular. At the same time, Jax is an incredibly people-driven investor. She’s got a natural empathy with founders and a knack for connecting people together and building warm and supportive communities. Anyone who has been to one of her OnDeck dinners, or to a 10-to-100 event can vouch for this. And driving all of this is a sense of genuine curiosity. I have a theory that of all the various personality traits, success in VC correlates most highly with curiosity. Jax is a great example of this. She’s naturally interested about technology, and even more so about people. When she talks to founders she wants to understand who they are and what makes them tick. She leads with curiosity in everything she does. You can’t fake this — and founders love it. But you don’t have to take my word for it. I asked a few great founders to get their thoughts: “Jax is the kind of impressive human you rarely meet. She’s friendly, super intelligent, and committed to success. I look forward to her Substacks every couple of weeks, including the impressive array of books, blogs, and podcasts she devours. Jax was the first to introduce me to the AirTree crew, and several meetings later, we’ve still not run out of things to talk about, and I’m pretty sure I’ve learnt something new from Jax every time we’ve met. What a super star” — Dom Pym, CEO @ Up “Jax has provided some great support to Spaceship. Her constructive ‘checking and challenging’ has ensured we have been able to put our best foot forward, despite the uncertain business and macro environment. She’s also been very proactive in helping us make the most of the AirTree network and connections, providing introductions that have given us some exciting new business opportunities to pursue” — Andrew Moore, CEO @ Spaceship “Jax has a unique vision focused mind that gets to the core of our business almost immediately. 
It’s refreshing to skip the standard questioning and dive into exciting conversation about our potential and the real opportunity.” — Alex Zaccaria, CEO @ Linktree “Jax is one of the most insightful thinkers on consumer trends and brands in Australia, she’s been a valuable source of feedback throughout our time building Eucalyptus and i’m excited for the founders who are going to get to work closely with her in the future.”— Tim Doyle, CEO @ Eucalyptus The new role So what does this role change mean in practice? A lot. Like Elicia McDonald (who was also promoted to Principal at AirTree last year), Jax will now be leading her own investments and sitting on boards of our portfolio companies. That’s pretty great news for founders. And what sort of founders is Jax looking for? I thought it was best to ask her directly: “People whose eyes light up when they tell you how they’re solving a complex problem in a novel way. Who are both rational and emotional when describing how they’re going to build the team to tackle this problem in the next 6 weeks and the next 6 years. And who are good people first.” Jax If you haven’t met Jax yet — you’re missing out. Send her a quick hello on twitter or via email at jackie [at] airtree (dot) vc.
https://medium.com/airtree-venture/congrats-jax-airtrees-newest-principal-ce6ed7aebd2c
['James Cameron']
2020-08-07 00:15:36.462000+00:00
['Startups', 'Technology', 'Startupaus', 'VC', 'Insights']
560
Remote Viewing the Year 2050 With Stephen A. Schwartz
Remote viewing pioneer Dr. Stephan A. Schwartz recently discussed his precognitive remote viewing project of the year 2050, which was conducted in 1978. In addition to anticipating historical occurrences many of us would live to see, such as the fall of the Soviet Union, the advent of the smartphone, the appearance of HIV, and the increase in pandemics, his team also made some startling predictions we have yet to endure (and may not want to). He cautions against using remote viewing to look too far into the future, because conditions may be so foreign to what we are familiar with that we may have no reference point by which to describe it. Indeed, many of the accurate predictions conducted in 1978 which we have come to witness were unintelligible even at the time they were conducted. For example, he recounts his experience asking the Deputy Director of the National Institutes of Health (NIH) about whether or not a blood-borne disease could cross from non-human primates in Africa over to humans and result in a devastating epidemic, at which point the director told him to “quit smoking” whatever it was that led him to have such thoughts. Nevertheless, it was not long after that that HIV reared its ugly head and resulted in the deaths of countless millions of people. The sudden disappearance of the Soviet Union and the advent of a physical device that could be used to speak with someone and also see them as though they were on a small television, were equally unintelligible to anyone, including educated futurists, at that time. The tasking consisted of asking the viewers to go to a large city and look at it on the same day (in 1978) but in 2050. Viewers reported that Los Angeles and much of the West Coast, as well as Florida, had become inundated with water to the point of no longer being habitable. Dr. Schwartz is convinced this will result in the collapse of real estate value and the end of what he rightly refers to as “vampire capitalism.” These disasters, which will allegedly be the result of climate change, were no less unintelligible at the time because climate change as a serious political issue would not emerge until many years later in the early 1990s. His viewers also speak of a “great event” that occurs between 2040 and 2045, although it is unclear what this will consist of. Nevertheless, he is convinced, based on the results, that carbon-powered vehicles will be basically non-existent at this time and the petroleum industry itself will vanish into obsolescence. Although he believes the United States will endure as a de jure legal entity, he is convinced that the U.S. states themselves will merge based on a combination of regional and political proximity. At this time, records of this remote viewing project consist of a voluminous 15,000 pages. Be a writer for Remote Viewing Community Magazine. Founding editor Katherine T. Hoppe. Join the RVCM Facebook group
https://medium.com/remote-viewing-community-magazine/remote-viewing-the-year-2050-with-stephen-a-schwartz-8bfa8635ecb2
['Monad Mantis']
2021-02-08 05:54:43.059000+00:00
['Remote Viewing', 'Future', 'Life Lessons', 'Precognition', 'Future Technology']
561
Xpeng
EV company profiles — #3 Xpeng Xpeng P7 Sedan Xpeng is the third Chinese manufacturer we are analyzing in this EV Company Profile Series, after Nio and Li Auto. While Nio is trying to position itself in the premium segment and Li Auto is not strictly a pure EV maker thanks to its range-extender hybrid technology, Xpeng appears to compete head to head on price with Tesla. So far the results have been consistent and sales are picking up, especially for the new P7 model, but lower prices mean lower margins, and it is not easy to reach profitability from scratch while competing with a company like Tesla, which is already profitable and can cut costs more easily thanks to higher volumes and margins. Moreover, Xpeng is surrounded by controversy: it has been accused of stealing secrets from Apple’s autonomous car project and, more recently, of copying part of Tesla’s Autopilot source code by hiring a former Tesla employee. With that as an introduction, let’s carefully analyse the company and its future outlook together.
https://medium.com/@stefanoosellame/xpeng-43015a7e9e14
['Stefano Osellame']
2020-12-09 16:02:19.137000+00:00
['Self Driving Cars', 'Innovation', 'Technology', 'Tesla', 'Electric Car']
562
Musicians Generate Every Possible Melody and Make it Public
Musicians Generate Every Possible Melody and Make it Public Using algorithms, musicians generated every possible melody and gave other musicians access to it. Damien Riehl and Noah Rubin generated and saved every possible MIDI melody and then shared the collection with the public under a Creative Commons Zero license, making it possible for any musician to use it. Musicians generate every possible melody. Damien Riehl and Noah Rubin. The goal of their remarkable feat is to make it easier for songwriters to write songs. They have published the code on GitHub. It is all about enabling musicians to create freely, without fear of a potential lawsuit. If you’ve ever thought one song sounded similar to another, the culprit may not be an unethical forger, but rather the limited mathematical musical combinations that our favorite artists have to work with. Copyright law is at risk of severely limiting future music creation and future human creativity. Riehl and Rubin developed an algorithm that recorded every possible 8-note, 12-beat melody combination. This is similar to the brute-force approach hackers use to guess passwords: working through every possible combination of notes. Their algorithm generates roughly 300,000 melodies per second. In his TED talk Riehl said: “Under copyright law, numbers are facts, and under copyright law, facts either have thin copyright, almost no copyright, or no copyright at all. So maybe if these numbers have existed since the beginning of time and we’re just plucking them out, maybe melodies are just math, which is just facts, which is not copyrightable.” You can see Riehl here:
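To make the brute-force idea concrete, here is a minimal sketch of exhaustively enumerating short melodies in Python. It is only an illustration of the approach: the pitch range and melody length below are assumptions for the example, not Riehl and Rubin's exact parameters (their actual code is on GitHub).

```python
# Minimal sketch of brute-forcing every 8-note melody over a 12-pitch range.
# Illustrative only; the pitch range and melody length are assumptions,
# not Riehl and Rubin's exact parameters.
from itertools import product

PITCHES = range(60, 72)   # MIDI note numbers C4..B4 (one octave)
MELODY_LENGTH = 8

def all_melodies():
    """Yield every possible melody as a tuple of MIDI note numbers."""
    return product(PITCHES, repeat=MELODY_LENGTH)

if __name__ == "__main__":
    total = len(PITCHES) ** MELODY_LENGTH   # 12**8 = 429,981,696 melodies
    print(f"Total melodies to enumerate: {total:,}")
    # Print the first few as a sanity check.
    for i, melody in enumerate(all_melodies()):
        print(melody)
        if i >= 4:
            break
```

Even at 300,000 melodies per second, enumerating and persisting a space this size takes time, which is why the pair describe their approach as a brute-force sweep rather than anything musically clever.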
https://medium.com/data-science-rush/musicians-generate-every-possible-melody-and-make-it-public-df408e2f751b
['Przemek Chojecki']
2020-02-27 14:00:42.127000+00:00
['Technology', 'News', 'Music', 'Programming', 'Law']
563
HyperX Cloud Alpha S 7.1 Gaming Headset Review
Launching today, the HyperX Cloud Alpha S is more than just an iteration on one of the best-sounding gaming headsets ever made. Indeed, it firmly takes its place as the “flagship” of HyperX’s mainstream wired headset offerings. It combines the stellar audio performance, design, and comfort of the Cloud Alpha with a features package that outdoes the venerable Cloud II. If you’re looking for the best headset that HyperX offers below $150, this is it. OVERVIEW The HyperX Cloud Alpha S is a closed-back, wired gaming headset with a robust features package. It sells for $129.99, and it’s available in a nice new blue color. That’s right, it’s a break from the standard red-and-black color scheme that has long been the company’s trademark. At a first glance, this looks just like the Cloud Alpha. But a number of smart build and feature changes have been made in exchange for your extra $30. The ear pads are thicker and more comfortable. If you’re not a fan of leatherette, you can switch to the newly-included sports fabric ear pads. The headband materials have been tweaked for a softer fit. The headset now features adjustable bass levels thanks to bass port sliders. And as the cherry on top, the Cloud Alpha S includes a new 7.1 USB audio card. Unlike some previous “7.1” offerings from HyperX, this is a true 7.1 audio device which registers as such in Windows 10. Not only that, but HyperX offers multiple firmware updates for users to choose from with different EQ profiles and virtual speaker positions. In many ways, this feels like the Alpha’s final form. It’s a powerful competitor for the recently-released Logitech G Pro X, and seems perfectly-designed for folks like me who were frustrated that Cooler Master omitted the Takstar Pro 82’s bass sliders from the MH752 when they licensed it as a gaming product. If you’re a PC gamer, this is one of the most robust options at this price point. And if you’re on consoles, the improved comfort and bass sliders may still be worth the $30 premium to you. SOUND QUALITY Since the Cloud Alpha S shares its ear cup design with the original Cloud Alpha, it has a small row of bass ports along the top of the cups in addition to the bass sliders near the bottom. Thus, I was expecting that with the sliders fully closed, the Cloud Alpha S would sound just like the original Cloud Alpha, and that it would add more and more bass in the two open positions. I was wrong. HyperX shipped my evaluation pair with the sliders in the middle of three positions, and in that position these sound just like the original Cloud Alpha. The bass is warm and accurate, and the much-hyped Dual Chamber Design really works to keep it separate from the mids and highs, which are as reasonably clean and accurate as they’ve ever been. I’ve long-recommended the Cloud Alpha as a great just-warm-of-neutral headset at this price, and that’s not changed with the S model. You can hear the Dual Chambers in action if you play around with the sliders while they’re on your head. Opening them all the way boosts the bass region by several dB, providing a fun slam to the low end. I’ve used this headset for the last four days, and spent most of my time with the sliders fully open. It’s easily my favorite of the three positions. It gives the headset a bassy thick sound, but without screwing up the rest of the range or thickening up the mids. 
It’s impressive, and while it’s not in the same league of bass response as HyperX’s exceptional Cloud Orbit, or as refined as the Arctis Pro Wireless, it’s darn close…which is all you can ask at this lower price. Closing the sliders cuts almost all of the low end out, sort of like the closed position on the old Custom One Pro. The bass ports on the tops of the cups that I mentioned earlier have been blocked off with some type of foam sound damping material, which is why you have to use the middle position to mimic the original Alpha. With the ports closed, the sound takes on a slightly anemic, slightly thin quality that I don’t love. You’ll hear bass energy trying to happen and then it’ll immediately cut itself off. On the plus side, closing the ports increases the isolation slightly and allows you to focus on the mids and highs. If you’re a footsteps fiend, or want to do some listening for hiss in a recording, or just need the most isolation possible, then it’s a good option. But I’d wager most people will want to use the middle or the fully open positions. Even with the ports open, you’ll still get enough isolation to use these in a loud coffee shop without blasting your audio. During my testing, I played several hours of Borderlands 3, GreedFall, and Resident Evil 5…a classic that I know the sound of very well. I personally like a little extra bass energy when I’m gaming, and the open position is great for that while still maintaining all the detail and clarity the Cloud Alpha is famous for. These are technically the most bass-heavy headset HyperX produces outside the Warm EQ mode on the Orbit, so if you’re a bass fan, this is the HyperX product for you. It’s also an exceptional headset for most genres of music, and a little kinder to badly-compressed pop music than a more critical “audiophile” headphone will be. The Logitech G Pro X offers a slightly more bright, more neutral sound…but it’s not as “fun” out of the box. Fortunately for HyperX, their virutal surround implementation totally destroys Logitech’s. VIRTUAL SURROUND DONGLE I was delighted when I plugged in the dongle and it showed up as a 7.1 device on my PC. This joins the Cloud Revolver S’s Dolby Headphone dongle as the second HyperX device to properly support 7.1 game and movie audio for PC users. Unlike the Revolver’s sound card, HyperX has taken a stab at crafting their own custom virtual surround system here. The results are quite interesting! On the firmware update page for the dongle, HyperX offers two choices of EQ profile. The “FPS” profile offers a slightly larger emphasis on bass for gunshots and explosions, and the “Sound Widening” profile is more neutral, with sharper highs. Unlike the abysmal Logitech G Pro X surround sound, neither of these profiles ruins the audio with too much bass. The new dongle has a white LED instead of the red one featured in the old models. That, and proper support for surround source audio. The EQ is a little tweaked over the standard stereo listening mode, but feels tuned to bring out the best the headset’s default sound signature has to offer. Neither available profile has too much fake room echo, and both offer a bit of a volume boost for those of you who want things louder. The “Sound Widening” profile offers a standard virtual 7.1 speaker setup with the channels emanating from the directions you’d expect. The “FPS” profile shakes this up, and sees HyperX going for it with their own custom virtual speaker placements that are unlike any other virtual surround system I’ve ever heard. 
The front left and right channels get pushed out to the sides, and the standard surround channels are closer in and seem to have a bit of a height component. It’s…interesting and different, and while I personally prefer the sound of the “Widening” profile, I think it’s awesome that HyperX tried something new with a truly custom virtual surround tuning that’s not just marketing hype. Installing a different firmware is a painless one-click process, and while it may not be as adjustable as the Cloud Orbit, it’s still nice to have this option. I look forward to seeing if they release any additional profiles in the future. Instead of the standard mic volume buttons other HyperX dongles offer, here there’s a game/chat balance feature. That’s probably a more practical inclusion, since I’d wager most users will set their mic volume once then never touch it again.
https://medium.com/@xander51/hyperx-cloud-alpha-s-7-1-gaming-headset-review-5b282977d67
['Alex Rowe']
2019-09-24 17:58:55.535000+00:00
['Gaming', 'Music', 'Headphones', 'Technology', 'Audio']
564
Comparison Between Reolink Argus 2 VS Argus Pro
Which is the longer-lasting security camera: the Reolink Argus 2 or the Argus Pro? Both Argus cameras retain their predecessor’s white capsule-shaped enclosure and its impressive specs: a 130-degree field of view, 1080p resolution, night vision up to 33 feet, motion detection, and two-way audio. They also keep the IP65 ingress protection rating, meaning neither harmful dust nor projected jets of water can get in and damage the camera. Today I will discuss which one is better: the Argus 2 or the Argus Pro. Full Comparison and Reviews: Reolink Argus 2 vs Argus Pro Some changes were made to enhance the camera’s operation, though. One of the most significant was replacing the original Argus’ four non-rechargeable CR123A batteries with a rechargeable lithium battery. This lets you charge the camera directly from an AC outlet or, if you’re using it outside, provide continuous power by connecting it to Reolink’s Solar Panel, an accessory designed specifically for the Argus. The installation manual was downloaded from here. The goal of this blog is to provide an easy setup guide and a camera evaluation for an outdoor security scenario: capturing both pedestrians on a sidewalk and the license plates of cars entering a confined cul-de-sac.
Step 1: Unpacking. The Argus Pro and Argus 2 cameras don’t come with onboard memory but instead have a microSD card slot. They accept up to a 64 GB card. First thing out of the box, insert a microSD card (about $10 for 64 GB in 2019). With this, the frustration begins. The opening is just barely large enough for the card, so you’ll need a pin or a long fingernail to actually push it in all the way. Once it’s in, forget about ever removing or replacing it. I installed three cameras, and the push-to-eject mechanism only worked on one of the three (and stopped working after one eject). Why does this matter? Two reasons. One, chances are that 3–5 years from now the flash on the microSD card will wear out. Would you rather replace a $10 flash card or a $100 camera? Two, the camera stores standard H.264 video clips to the card in a standard Windows-compatible file system. If you can eject the card, you can manually archive footage and view it with standard software, without requiring the WiFi connection and the proprietary Reolink PC Client software. I fully expect that in the future I will be opening up the housing of these supposedly waterproof cameras to try to extract and replace the card.
Step 2: Camera Configuration. Like many consumer products (the Amazon Echo, for instance), your smartphone is required to configure the cameras. I’m not a huge fan of this, because it pretty much guarantees that 3–5 years from now, as iPhone and Android technology moves on, you won’t be able to reconfigure all this camera kit you just spent hundreds of dollars on. I much prefer PC web-based configuration (like for routers, etc.), as it is far more likely to still work down the road. But what can you do? Anyway, for iPhone, go to the iTunes Store and download the free Reolink app. Fire it up and follow the prompts. First it wants to learn your camera information by looking at the QR code behind the battery, so before snapping the battery onto the camera, run the iPhone app and point your iPhone at the QR code. Once it’s recognized, it will ask for your WiFi information. It probably already knows your home WiFi SSID, but you still need to enter the password. After this is done, your iPhone will create its own QR code on the phone screen. Insert the battery in the camera and point the camera at the phone until it recognizes the QR code. The camera uses speech synthesis in English to prompt you — which is great (if you speak English, of course). That’s pretty much it. Your phone now has full control of the camera, and you can use it to watch live video, change settings, etc. But being a phone, this all pretty much sucks. I want the PC app for 24″ large-screen video, mouse control, etc. So…
Step 3: PC Client Configuration. Download the PC (or Mac) client from the Reolink website. It comes in a ZIP file; pull out the EXE and install it. Click on “Add Device” and now ignore everything the software and the manual tell you. DON’T click the “Scan Device in LAN” button — because it will NEVER EVER EVER find your battery-powered Reolink Argus devices, even if they’re on your LAN. This button isn’t meant for you, Argus camera buyer. The manual doesn’t say this, the software doesn’t say it — SO I’M TELLING YOU THIS RIGHT NOW. Don’t press the button. It will be extremely disappointing. Also, DON’T enter the IP address of your camera (available in your iPhone app) and attempt to enter Media Port 9000, name, password, etc., as that won’t work for the Argus devices either. Both of those methods must be for other Reolink products (POE wired cameras, etc.). What is EXTREMELY obscure, and what I finally located after searching around for help, is that you need to select Register Mode = UID (rather than “IP Address”). Now, finding the UID code to enter is also EXTREMELY frustrating. On the back of the Argus there’s a small sticker about the size of a pencil eraser. Not the large sticker behind the battery, but on the outside of the camera. This tiny sticker has a QR code, and beneath it there’s a 16-character alphanumeric UID in an incredibly small font. I had to use a magnifying glass to read it — and even then there were problems. One of my three cameras had an “I” in the UID, but it looked just like a “1” in the super-small font. It took multiple tries to get it right. For some reason, the iPhone app doesn’t report the UID for the cameras on the information page after setup. This whole UID thing was a nightmare. Once I got through it, things went much more smoothly.
Specifications comparison — Argus 2 vs Argus Pro
Reolink Argus 2 specs: Video Format: H.264; Wireless Security: WEP(ASCII)/WPA-PSK/WPA2-PSK; WiFi Standard: IEEE 802.11b/g/n; Camera Size: 3.8 x 2.3 x 2.3 in; Weight (Battery included): 8.1 oz; Operating Temperature: 14° to 131° F; 1 x Pack of Screws and Mounting Hole Template; 1 x Quick Start Guide and Surveillance Sign.
Argus Pro specs: Video Format: H.264; Wireless Security: WEP(ASCII)/WPA-PSK/WPA2-PSK; WiFi Standard: IEEE 802.11b/g/n; Camera Size: 3.8 x 2.3 x 2.3 in; Weight (Battery included): 8.1 oz; Operating Temperature: 14° to 131° F; PIR Detecting Distance: up to 30 ft; PIR Detecting Angle: 130° horizontal.
In the box — Argus 2 vs Argus Pro What are the similarities or differences between the Argus 2 and the Argus Pro? Night Vision Even though the Argus 2 and Argus Pro share the same case, there are still two differences between them: the Argus 2 has a Starlight 1080p sensor while the Argus Pro comes with a conventional 1080p sensor, and the Argus 2 supports both the outdoor security mount and the wall magnetic mount, while the Argus Pro only supports the former. Viewing Angle The Argus 2 has a wide field of vision to cover more and see more. The battery-powered outdoor camera with a 130° wide viewing angle never misses a detail. PIR Detecting Distance: up to 30 ft. The Argus Pro is also a battery-powered outdoor camera with a 130° wide viewing angle to capture every detail. Motion Detection Record The Argus 2 sends instant motion alerts via email or push notifications to your device once triggered. Live view with two-way audio gets you involved directly, and a 75 dB siren alarm helps scare potential burglars away. The Argus Pro shares the same feature — instant motion alerts via email or push notifications once triggered — and its customizable siren alarm also keeps potential burglars at bay. Works with Google Assistant Argus 2: hands-free smart home control — say “Hey Google, show me the backyard” and you’ll see the live feed from the camera on your Google Nest Hub or Chromecast-enabled TVs. Argus Pro: just say “Hey Google, show me the backyard” and you’ll see the live feed from the camera on your Google Nest Hub or Chromecast-enabled TVs. Lasting power with solar option Equipped with a rechargeable battery, the Reolink Argus 2 has long-lasting power per charge. It can also get non-stop power when paired with the Reolink Solar Panel (sold separately). The Argus Pro can go totally wire-free thanks to its rechargeable battery and WiFi connectivity. With no wiring and no drilling, the Argus Pro is portable enough to fit in anywhere.
How to set up and use The Reolink app’s guided setup is simple and painless, requiring you to scan a couple of QR codes — one on the camera, the other displayed in the app itself — to connect the camera to your network. The capsule-shaped camera has a flat bottom, so if you’re using it indoors you need only place it on a table or shelf. Unlike the Argus 2, the Pro doesn’t come with a magnetic ball mount for affixing the camera to a wall. Setting the camera up for outdoor use is a little more involved. You’ll first need to lock it onto the outdoor security mount, then screw the mount to an exterior wall or eave. A set of screws and a mounting-hole template are included with the camera to simplify the work. A mounting strap is also provided, which threads through the bottom of the mount so you can loop it around a tree or fence post if you’d rather. If you’re going to use the solar panel, you’ll also need to screw its mount adjacent to the camera (again, screws and a mounting template are included, but there’s no strap), then slot the panel onto the mount and angle it to receive direct sunlight. Finally, you must connect the solar panel to the camera with the supplied micro-USB cable. Similar Post: Reolink vs Amcrest Reolink Security Camera System Review
https://medium.com/@getlockers1/comparison-between-reolink-argus-2-vs-argus-pro-827e9f0f7c4c
['Smart Locks']
2020-10-11 07:03:38.778000+00:00
['Câmera', 'Technology', 'Reolink', 'Security Camera', 'Safety']
565
How It All Happened at the ‘Everything of Foresting’ Meetup at DeCentre Blockchain Caffe
How It All Happened at the ‘Everything of Foresting’ Meetup at DeCentre Blockchain Caffe DeCentre Blockchain Cafe in Gangnam Over 100 blockchainers and crypto enthusiasts turned up for the maiden Foresting Meetup, held at the DeCentre Blockchain Caffe in Gangnam last Tuesday. Titled ‘Everything of Foresting,’ the event set out to explain everything that Foresting is about to a packed audience. The Caffe happens to be the first blockchain-themed coffee shop and coworking space to open in Korea. It is a safe and welcoming space for the general public, especially crypto enthusiasts, to meet, share knowledge, and talk blockchain. The floor is spacious enough to accommodate over 100 people for startup and blockchain events. Participants at the Meetup The Foresting Meetup brought together blockchain industry experts, ICO advisors, token investors, and enthusiasts to learn more about Foresting’s services and, more importantly, about the 1st Public ICO Pre-sale, which ends on Friday, August 24, 2018.
https://medium.com/foresting/how-it-all-happened-at-the-everything-of-foresting-meetup-at-decentre-blockchain-caffe-e3cce28c3828
['Williams Nana Kyei']
2018-10-19 01:39:04.661000+00:00
['Events', 'Blog', 'Cryptocurrency News', 'Blockchain Technology', 'Meetup']
566
How to Launch Apps by Back Tapping Your Android Phone
What’s Tap, Tap? The Tap, Tap APK grants you access to Android 11’s double-tap gesture on the back of any device running Android 7.0 or above. Tap, Tap is a port of the double tap on back of device gesture from Android 11 to any Android 7.0+ device. It allows you to use the gesture to launch apps, control the device (including pressing the home, back and recents buttons), take a screenshot, toggle the flashlight, open your assistant and more. Tap, Tap uses the same machine learning code and TensorFlow models from the Android 11 builds with the gesture, with code directly lifted from SystemUIGoogle where needed. Tap, Tap is currently in beta and was released by the developers due to public interest in the feature. Hence, for now, you cannot download it from the Google Play Store. However, you can download the app and install it on your device manually. How to sideload Tap, Tap While on your Android device, follow this link to the XDA forum thread. Afterward, download the most recent APK file. In case you haven’t allowed installation from unknown sources, you’ll get a pop-up taking you to the browser setting: slide to Allow From This Source. Then, you can go back and proceed with the installation. How to launch any app with Tap, Tap Now that you’ve installed Tap, Tap, the real fun begins. First, enable the Accessibility Service. This allows the app to perform actions based on your taps. Screenshot by the author Then, under Accessibility, enable Tap, Tap. Screenshot by the author Afterward, let's enable and configure the Gesture settings. You first have to enable Gesture; then, you can select either Double Tap Actions or Triple Tap Actions. I have only used Double Tap so far. Screenshot by the author Click on Double Tap Actions; then Add Action; Launch; Launch App; press the + sign; and select the app you would like to launch by back-tapping. Screenshot by the author The latest version of Tap, Tap will also allow you to launch Google Assistant, take screenshots, or use utilities like the flashlight just by tapping the back of your phone. There’s a lot more, but I will let you further explore all the cool actions available. However, in this case, we want to launch our favorite app quickly. Thus, you have to drag the launch app feature to the top of the menu (click and hold to drag). Voilà, you are all set to start tapping! I hope you enjoy Tap, Tap and this friendly way to interact with apps and gain precious time. Feel free to use the response box to share your further insights. Your feedback is highly appreciated.
https://medium.datadriveninvestor.com/how-to-launch-apps-by-back-tapping-your-android-phone-3551346eb550
['Rui Alves']
2021-02-22 15:54:08.230000+00:00
['Tap Tap', 'Digital Life', 'Apps', 'Android', 'Technology']
567
The CISO Dilemma
When Leadership Ignores Risks What should a CISO do when the executive leadership chooses to ignore critical cyber risks? If the C-suite and board are well informed of critical vulnerabilities and yet choose to ignore security, the CISO is put in a position where they are incapable of effectively managing risk, yet still responsible when incidents occur. Let’s break down the problem: what a CISO must do, how people disposition risks, and finally the recommended actions.
https://medium.com/@matthew-rosenquist/the-ciso-dilemma-a00c7e329b
[]
2020-12-24 17:44:58.911000+00:00
['Infosec', 'Videos', 'Cybersecurity', 'Management', 'Technology']
568
Digital Identity as an Investment
What is Digital Identity? Digital identity (DID) can be broken down into two distinct aspects. The first is “the fact of being who or what a person or thing is¹.” This first aspect can be referred to as foundational identity and is generally indicated by credentials: legal name, passport, SSN, and other officially issued forms of qualification. It is inextricably tied to the physical world (via artifacts or biometrics) and the institutions that govern it. ID2020 has created a useful framework outlining the properties of a foundational digital identity, advocating that a responsible digital ID be: personal, privacy-preserving, portable, and persistent. The Omidyar Network further outlines five design principles for Good ID: privacy, inclusion, user value, user control, and security. De-duplication and fraud prevention are top-of-mind concerns for foundational DIDs. Policymakers and governments are instrumental in developing the foundational identities upon which functional identity can be further developed by the private sector. The second aspect of identity can be defined as “the characteristics determining who or what a person or thing is¹” and is largely composed of online attributes: email address, likes, follows, purchases, etc. This aspect of identity can be referred to as functional identity and is less reliant on the physical world, since these attributes are not usually verified by a third party and can be established entirely via one’s online behavior. Thus, functional identity is inextricably tied to data. Functional identities require a different framework in which persistence may not be a desirable quality, as users may want to isolate different online interactions and prevent correlation in order to preserve privacy or operate under multiple pseudonyms. Microsoft has put forth a framework that involves primary (persistent) and non-pairwise (non-persistent) digital identifiers to allow for this flexibility. Functional identity is often monetized in ways that are extractive. Both forms of digital identity can be provisioned and verified using centralized or decentralized methods. Decentralized digital identity is often referred to as self-sovereign identity (SSI) and exhibits the principles identified by Christopher Allen below. Proponents of self-sovereign identity advocate for an architecture in which the user owns and controls their own identification data, to be provisioned out to service providers. This stands in contrast to the current system in which each service provider replicates and re-verifies a user’s data. Oftentimes, SSI leverages distributed ledger technology. SSI is not to be confused with federated identity systems, which encompass single sign-on (SSO). SSO (“Sign in with Google/Facebook/Apple”) attempts to create one on-boarding process that grants access across sites and services, but results in the accumulation of a large amount of personal data by the single authenticating party. Why Does Digital Identity Matter? At the most basic level, foundational identity consists of data points that are recorded on birth certificates, passports, and state-issued IDs. The problem is, these forms of identification require the maintenance of physical artifacts in an increasingly digital world, are completely reliant upon the central authorities that issue and validate them, and are susceptible to theft and fraud.
~1.1 billion people globally lack a legal form of identification⁵, preventing them from accessing financial services, purchasing real estate, voting, and partaking in a myriad of other important activities. While this article will focus primarily on private sector approaches to functional identity, please see The Impact of Digital Identity for more on foundational identity. From a functional identity perspective, since the Internet lacks a native identity layer, each Internet service provider is forced to conduct authentication procedures individually. Consumers are thus forced to share their personally identifying information (PII) with many different service providers. These service providers are, in turn, required to store and safeguard this sensitive data. Duplication and replication of this data is inefficient and creates many points of vulnerability. Enterprises don’t want this liability. This system also creates a negative user experience. When consumers have to provide the same authentication data to multiple service providers, it slows down on-boarding processes for new interactions and increases the time required to engage in existing relationships with service providers. Valuable time is wasted retrieving and resetting passwords and, given the number of distinct accounts consumers maintain, consumers are likely to use the same passwords across accounts. This leaves consumers vulnerable to identity theft. Why is Digital Identity Challenging? The Sovrin Foundation breaks down the challenges of digital identity into five categories, illustrated below. I’ve added completeness as a sixth challenge. Finally, all identity solutions face an inherent trade-off between security (effectively and appropriately restricting access and excluding bad actors) and frictionless access (improving convenience and speed, and including more good actors). Why is Digital Identity Relevant Now? While not immediately obvious, COVID-19 is a catalyst for DID. The most obvious impact, while fraught with ethical issues, is the need to monitor health status as we return to economic activity during the pandemic. This process could require citizens to carry “immunity certificates,” which are essentially digital IDs tied to health data. A large portion of the global population may soon be equipped with a digital wallet that holds their unique digital identity and digital assets (i.e., health certificates). There are also many second-order catalysts related to the pandemic. Given the shift to remote work, enterprises face a real authentication challenge as their workforces access sensitive data and engage with an increasing number of applications (approaching 100, on average³) via remote devices. The reliance on processes that leverage in-person verification has resulted in delays and dysfunction across a large number of critical processes and, in some cases, has resulted in increased fraud. Finally, digital identity is a key enabler of the move towards cashless societies, accelerated by the pandemic given the need to quickly and accurately distribute funds and the desire to avoid physical currency. What are the Components of Digital Identity? There are many aspects to one’s identity. The World Economic Forum breaks down the technical identity stack into the above layers. The layers are discussed in more detail below, somewhat proportional to the level of startup activity in each.
Standards such as SAML, WebAuthn, OpenID Connect, and OAuth have been, and will remain, critical to the development of the digital identity ecosystem. New digital identity protocols and standards, many for decentralized architectures, are also being developed. Solid, an open source project led by Sir Tim Berners-Lee; Sovrin; Blockstack; and Microsoft’s ION (Identity Overlay Network, to be built on the Bitcoin protocol in conjunction with the Decentralized Identity Foundation) are examples. Protocols tend to be open source and are often viewed as public utilities. While value can accrue to these “public utilities” (i.e., Ethereum), they could require a longer investment horizon as they must effectively incentivize developers to build services and products on top of a new network. Attribute Collection involves the processes by which characteristic data is collected and stored and encompasses personal data stores. 3Box and Blockstack are startups building decentralized solutions in this layer. Authentication is perhaps the most crowded space within the digital identity stack. Authentication answers the questions “how do I prove who I am?” and “how do I prevent others from pretending to be me?” It also encompasses identity-related fraud reduction and security solutions such as Sift Science (leveraging machine learning to reduce fraud) and SentiLink (combating synthetic identity fraud). Comprehensive ID: Completeness is one of the key challenges of DID. GlobaliD is working on this challenge by operating a sort of “DNS for identity”, in which identity verifications are attached to a name located in GlobaliD’s public namespace. Users can have more than one name (which can also be privacy-preserving), but GlobaliD enables traceability in a way that creates a complete view of a user. GlobaliD acts as a sort of identity backbone, connecting to identity verifiers across silos, including self-sovereign identities. Unum ID is a startup that is working to create a decentralized, federated ID so that users have one digital identity that they can use to access all services. Reusable ID: Reusable know your customer (KYC) verifications aim to reduce duplication and redundancy in the authentication process. Civic and Trusted Key (acquired by Workday) are blockchain-based startups working with enterprises to facilitate reusable KYC. Once an entity has verified a user, other enterprises can leverage this KYC, provided they trust the authenticating entity. Authenticating entities are compensated for their verifications. Passwordless ID: In 1995, Bill Gates claimed that passwords were dead², a claim that has been repeated over the decades. However, advances in both hardware and software, combined with government efforts on foundational identities (you need something against which to match biometrics), may finally have created a conducive backdrop for passwordless solutions to succeed. Beyond Identity, Secret Double Octopus, and HYPR are all working on passwordless authentication. Companies like Smile Identity and Element are combining biometrics with mobile phones to enable authentication in developing economies in Africa and Southeast Asia. Callsign is similarly leveraging biometrics, and other advanced techniques, to enable mobile authentication globally. Attribute Exchange involves how data is exchanged between entities and encompasses privacy-preserving methods for data exchange. Data encapsulation is one approach, which keeps data private and confidential while allowing identity verification via a protocol that enables a common source of truth. These systems can then leverage selective disclosure, whereby third parties can verify attributes without accessing the entirety of the underlying data (i.e., a person is above 18 years old, a passport matches the one on file, etc.). uPort and Oasis Labs are two companies building decentralized protocols for attribute exchange. Authorization involves permissioning and access management. It answers questions like “is this person allowed to enter?” or “is this person allowed to access this file?” Companies such as Proxy enable authorization via mobile access (turning a user’s mobile phone into an accepted ID). Since more US adults own a cellphone than a driver’s license, access is improved while overcoming the challenging economics of non-smartphone, hardware-based access approaches. OpenPath is another startup enabling mobile access. In practice, authorization relies on authentication, and therefore startups that operate in the authorization layer also authenticate users. Service Delivery encompasses identity-as-a-service providers and password managers. This is the layer in which the biggest valuations, and public companies, reside. Identity-as-a-service providers abstract the complexity of authentication workflows and enable many different authentication approaches. Okta is a public, cloud-based, enterprise identity management solution with +100M users, and Auth0, recently valued at $1.9B, is an identity-as-a-service provider for developers that abstracts the complexity of identity management. ForgeRock and OneLogin are later-stage startups that operate identity and access management platforms. Persona is an early-stage startup that has built developer tools that essentially create an API for identity, which is needed by companies that lack the expertise to build strong authentication and verification services in-house. Veriff is another startup that has built developer tools that aim to provide the fastest and most thorough log-in experience for users by collecting the most information about users in the fewest steps. Password managers are also important players in this layer, including Dashlane, 1Password, and LastPass. All of these services reduce the complexity of identity flows. Is Digital Identity “Investable?” It’s hard to define what constitutes an identity company.
For starters, identity is a hard sell as an application in and of itself, but many times identity is actually at the core of a business. For example, Fast enables one-click authentication and check-out, which improves the e-commerce experience for both shoppers and merchants, but is also a very powerful combination for taxes, investing, job or mortgage applications, and even checking in at the doctor’s office. There are many such companies that, upon closer inspection, are actually identity plays. While identity is not a sector, it is relevant in very large sectors including communications, financial services, and healthcare. Even the gig/passion economy is highly dependent on identity as a means to create trusted marketplaces (see Passbase). Even so, the direct identity opportunity set remains limited to 300–500 startups, and it’s difficult to make the case that there are deep exit opportunities as the list of potential buyers is limited. Identity solutions face very high minimum scale requirements and, therefore, identity startups must create or connect to a platform of some sort to generate real utility (i.e., Okta has +6,500 integrations). Thus, IPO opportunities for standalone entities also seem limited. Identity startups face real barriers to entry (regulatory, compliance, and trust challenges on par with FinTech). They also have to compete with platforms like Microsoft/Salesforce, which may ultimately become the dominant purveyor(s) of digital identity. Partnering with consortia may be a way for startups to “bootstrap” scale and compete against these established platforms, and some are employing this strategy. For example, a consortium of banks has been partnering with SecureKey as an authentication provider, and PayID is a consortium of blockchain-based payments companies hoping to establish a universal payment identifier. Whether you view identity as an “investable” opportunity will depend on whether you take a narrow or broad view of identity, whether you’re thinking of foundational or functional identity, whether you view it as a technology or a service, and whether you’re more interested in access or security. Foundational identity efforts are better suited to grant or impact funding. The Omidyar Network, the Gates Foundation, and the Mozilla Foundation all invest in foundational DID. Functional identity isn’t viewed as a category of its own, so it’s hard to find venture investors that focus specifically on identity. Funds that invest in identity range from dedicated funds such as PTB VC, to strategic investors such as Okta Ventures and SamsungNext, to generalist funds like First Round Capital (Persona), Kleiner Perkins (Proxy, Dust ID), NEA (Beyond Identity), and Andreessen Horowitz (SentiLink). The best identity solutions are privacy-first, nearly invisible, and improve convenience and/or security for customers. The most compelling opportunities are in authentication and service delivery and have go-to-markets that target enterprises or developers rather than end consumers. Consumers don’t want to take on the onus of identity management, and customers are not interested in the underlying architecture of DIDs. The best identity solutions are intelligent, secure, simple, and convenient. Successful startups enter the market with a narrowly scoped initial use case (i.e., mobile access or compliance with new regulations) and then gradually expand, adding products and features as they move closer to an identity platform over time. What is the Future of Digital Identity?
The centrality of digital identity in our increasingly online lives means DID will only grow in importance. Below, I’ll outline just a few future opportunities.
https://medium.com/digital-diplomacy/digital-identity-as-an-investment-d6c2ef21431d
['Justine Humenansky']
2020-07-28 14:43:54.026000+00:00
['Venture Capital', 'Innovation', 'Technology', 'Identity Management', 'Digital Identity']
569
Two Ways to Check If You Completely Suck at Programming
PROGRAMMING Two Ways to Check If You Completely Suck at Programming Because you probably don’t. Art by siscadraws Learning to code is hard. I’ve gone through countless YouTube videos, online courses, and personal projects. Yet somehow, I feel like I’m no further along than when I started. As I continue reading articles on programming, data science, and cloud computing, it’s hard not to feel overwhelmed and intimidated by the cool stuff everyone else is doing. Even looking at the Python subreddit makes me wonder how people who have only been learning for a few months can build such complex projects. I started the CS50 course by Harvard, which I would highly recommend to anyone looking for an introduction to Computer Science. After every lecture, you’re tasked with writing code to solve an assignment based on that week’s topic. I recently finished Week 3 on “Algorithms” and completed the second assignment. It took me almost six hours. When my code finally passed all the tests, I was more relieved than ecstatic. Staring at a seemingly simple problem and not knowing how to fix bug after bug for so long made me question whether I even had any problem-solving skills to speak of. I also wondered how long it took others to solve. Surely most people (from students at Harvard to the thousands enrolled in the online course) didn’t need to spend half a day on one assignment. Thankfully, before I spiraled into a black hole of self-doubt (mild exaggeration), two events put me back on track. They helped me realize that while it’s great to always want to learn more, it’s also important to believe in your abilities so far. You might be further along than you think. Let’s dive into the two things you should do to check if you suck at programming! 1. Build something for someone I recently looked at some code I wrote for work a few months ago and thought “wow, this is bad.” I shook my head and started to untangle the mess of functions and variables I’d created, wondering all the while why I didn’t add more comments to it. My girlfriend saw me grumbling to myself as I typed away and asked what was wrong. When I told her that my code sucked, she asked, “Does it work?” “Yeah, it works, but it’s so messy and I barely remember how I wrote this in the first place,” I said. “Well, you’re already almost done fixing it now, so that means you do remember,” she replied. Then she asked again: “Does it work?” It’s a simple question, yes, but it can reveal so much. First, I will preface the rest of this section by saying that yes, it’s important that what you build is built well (scalability, readability, avoiding deep nesting, DRY, etc.). I’m not saying that you should be happy if you write garbage that technically fulfills its intended function. However, if what you built serves its purpose and isn’t completely confusing when you need to explain how it works, you should be proud! This brings me to how you can test your programming skills by building something for someone. There’s an incredibly satisfying feeling that you get watching code you wrote help someone else. My girlfriend recently needed to translate text from PDF files. However, the PDFs she’s working with are scanned images. She can’t actually copy any of the text, so she’s had to manually type paragraphs into Google Translate. Also, since she only needs a few paragraphs of the PDF translated, using an online service that would translate the whole document (or even a page) wouldn’t be very helpful.
I saw what she was doing and thought to myself, “Hm, I could probably help with that.” I then wrote a really, really simple CLI application for her that would take an image as an argument, get all the text from the image with pytesseract, then copy that text to the clipboard. Typing paragraphs into Google Translate isn’t a hugely time-consuming task. With that being said, even if I’ve only shaved off 30 minutes of her work every day, that’s still an extra 30 minutes of her life that she gets back! Identifying a solution and building it to help her, strangely enough, helped me realize that all my learning so far isn’t confined to my head. I could actually use it to make other people’s lives better. And you can too! Look around at the people close to you, whether it’s your family or colleagues, and I’m sure you’ll be able to find something that you can help them automate. Being able to think of and develop a solution, even if it’s one for a trivial problem, is programming. 2. Teach something to someone This is advice that you’ve probably seen before. Teaching someone a concept you just learned is a great way for you to retain information. This is because it’s impossible to teach without first understanding the subject. If you try to teach someone a new concept without having mastered it, you may be able to recite a basic summary, but you won’t be able to answer any of their follow-up questions. I first learned about list comprehensions in Python a while back and I’ve used them in various scenarios since. However, I only really knew that I understood them when I helped my girlfriend with one of her scripts. She was looking for a way to automate moving files into different folders based on part of the filename matching a folder name. She first wanted to create all the folders based on the filenames. For example, a file called 12345_file.pdf was in the directory, which meant that she needed to create a folder called 12345. To do this, she wanted to implement a for loop to get the first digits of all the file names and put them in a list, then loop through that list to create all the folders. However, there were some cases where some of the first digits were duplicated. There could be another file called 12345_newfile.pdf, 12345_oldfile.pdf, and so on. This would lead to a list containing duplicate values, which would not work. To simultaneously avoid using a for loop to create a list and remove all duplicates from the list, I wrote this for her: folder_name_list = set([i[:5] for i in original_filepath_list]) We already had a list of the file paths, so I explained to her that you could use a list comprehension to create a list. You could define the components of the list and any conditions for it using a list comprehension instead of making an empty list and looping through and appending new elements to it. In this example, I would take the first 5 characters of each element in the existing list of file paths. Then, to remove all duplicates from the list, I just converted the list into a set, because a set in Python cannot have duplicate values. Going through that process felt great because: Removing duplicates from a list is an incredibly common task, so being able to explain how to do it quickly and without looking anything up meant I had really practiced doing it enough. Being able to explain a (relatively) more advanced operation like list comprehensions meant that I was doing more than just concatenating strings and stuff. I got to help someone out again!
I watched as someone learned something new and got to implement it in their own code, which meant that my learning resulted in a positive tangible impact in someone else’s life.
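For readers curious what the OCR helper mentioned above might look like, here is a minimal sketch of an image-to-clipboard CLI along the lines the author describes. The article does not include the actual script, so the structure, the flags, and the pytesseract/pyperclip pairing below are assumptions for illustration, not the author's code.

```python
# A minimal sketch of the kind of OCR-to-clipboard CLI described above.
# Assumes the Tesseract binary plus the pytesseract, Pillow, and pyperclip
# packages are installed; all names and flags here are illustrative.
import argparse

import pyperclip
import pytesseract
from PIL import Image


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Extract text from a scanned image and copy it to the clipboard."
    )
    parser.add_argument("image", help="Path to the scanned image (e.g. a page exported from a PDF)")
    parser.add_argument("--lang", default="eng", help="Tesseract language code, if that language pack is installed")
    args = parser.parse_args()

    # Run OCR on the image and grab the recognized text.
    text = pytesseract.image_to_string(Image.open(args.image), lang=args.lang)

    # Put the text on the clipboard so it can be pasted into Google Translate.
    pyperclip.copy(text)
    print(f"Copied {len(text)} characters to the clipboard.")


if __name__ == "__main__":
    main()
```

A script this small is exactly the sort of "build something for someone" project the article argues for: one argument in, one useful result out, and a few minutes saved every time it runs.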
https://towardsdatascience.com/two-things-to-do-to-check-if-you-completely-suck-at-programming-ce48b906243d
['Byron Dolon']
2020-09-01 22:01:15.029000+00:00
['Software Development', 'Learning To Code', 'Technology', 'Programming', 'Data Science']
570
How Can Innovation Eradicate The Pharma Supply Chain Management Risk?
Innovations in healthcare are going to be essential to the next generation of supply chains, given the need for faster turnaround times when responding to emergencies. During the pandemic, the pharmaceutical sector, like every other industry, witnessed supply chain disruption. With lockdown restrictions being partially lifted, production activity can resume, as there is some relief in the movement of essential goods. Challenges Faced by the Supply Chain The supply chain has been massively impacted by the cancellation of passenger flights and reduced freighter capacity. The pandemic affected ocean as well as land carriers. There was a huge disconnect between ground realities and official notifications; however, as the lockdown progressed there was more clarity on how to overcome the challenges around land transportation. While the government is taking measures to ease the movement of essentials, the supply chain has been hit by the cancellation of international flights, leading to reduced air freight capacity, congestion at airports and seaports, a shortage of drivers, and limited availability of labor across the country. Check Out — Healthcare Business Review Magazine The availability of transport capacity was also a big concern during lockdowns, when most drivers returned to their home base. Steps Taken It was essential to ensure that the movement of essential commodities, especially within the healthcare supply chain, remained stable at this critical time. To that end, companies came up with various alternatives to cancelled passenger flights and freighters. Initiatives have been undertaken to support customers in healthcare, continuous manufacturing, and mission-critical applications, chiefly vaccine manufacturing, personal protective equipment, and healthcare equipment. Along with charter services, security escorts are also being engaged for the safe movement of cargo to mitigate challenges in the current situation. In this process, companies have had to customize their transport flows and transit times depending on the zones or districts they were crossing, because the interpretation of rules varied widely from zone to zone and district to district.
https://medium.com/@healthcarebusinessreview/how-can-innovation-eradicate-the-pharma-supply-chain-management-risk-b406e5a07355
['Healthcare Business Review']
2020-12-22 06:57:20.278000+00:00
['Supply Chain', 'Pharma', 'Healthcare', 'Technology', 'Technews']
571
Apple Might Make a Big Jump
Apple is another name for Innovation! Be it Newton’s law of universal gravitation or Steve Jobs multinational technology firm Apple Inc. the roleplay of innovation has broadened the expanse of discovery. For thousands of years, the most dramatic events centrally focus on human development through experimental analysis. On comparing the previous history, there is a transcendental shift in human progress at personal fronts and a technological level. The reason for this progressive advancement is the “Think Different” and “Be Creative” approach. In the information age, the individual’s approach to seeking knowledge, resolving queries, disseminating information, and communicating has touched the roots of modernization, especially through digital developments. The technological company Apple Inc.’s growth boom in designing, developing and selling consumer electronics, computer software, and other online services centrally rest on thinking differently. With this innovative and creative impulse, Steve Jobs managed to change the world of technology and design far beyond anyone’s imagination. And the company’s pace is speeding up with time, emerging as a fierce competitor and challenging the global markets. This time Apple might make a potential-jump as a search engine giving competition to Google. This came as no thunderbolt to me. Somewhere down the line, people might have considered the possibility of other search engines. But, precisely when was the uncertainty. Apple certainly has turned our probable analysis into reality. With technological advancements accelerating at a rapid pace, it is natural for the market to become competitive. According to the Financial Times report — Apple has accelerated work to develop its own search engine that would allow the iPhone maker to offer an alternative to Google. For context, this behavior has been witnessed for a while as people have been observant about the feature popping up in beta versions of iOS. Jon Henshaw of Coywolf had noted back in August that the search volume is rising incredibly from Apple’s crawler. As per the Financial Times, Apple is developing its own search engine technology as the United States antitrust authorities threaten multi-billion dollar payments, which Google makes to be the iPhone’s primary engine. As per the lawsuit, the tech giant misuses its power to shut down its competitor in search ads. While Apple has been earlier focussing on its in-house search development, the lawsuit against Google made it explore the opportunity. To discover the opportunity requires critical analysis, market survey, taking calculated risks, and of course, thinking differently. The Founder Steve Jobs greatly inherited these skills, and the legacy is transferred thereafter. He quoted in the Apples “Think Different” campaign: “Here’s to the crazy ones — the misfits, the rebels, the troublemakers, the round pegs in the square holes. The ones who see things differently — they’re not fond of rules. You can quote them, disagree with them, glorify or vilify them, but the only thing you can’t do is ignore them because they change things. They push the human race forward, and while some may see them as the crazy ones, we see genius, because the ones who are crazy enough to think that they can change the world, are the ones who do.” Thus, Apple’s success with progressing times is an eye-opener towards grabbing the right opportunities with creative thinking. 
As for its push into search engine technology, reports note that Apple hired Google’s head of search, John Giannandrea, two years ago in a move designed to improve its artificial intelligence capabilities and Siri, the virtual personal assistant built into the iPhone. Siri’s growing share of search activity can be explained by the fact that it receives more and more queries and acts as an intermediary between Apple’s users and other search services like Google or Microsoft’s Bing; Google has modified and expanded its own offerings over the years to combat exactly this kind of disintermediation. As of now, it is unclear how Apple will execute its search engine plans. The picture is further complicated by Google’s global dominance in the technology industry and the trust people place in it. The matter remains a point of discussion, with no firm conclusion. Some reports claim that Apple will compete with Google head-on, with its own search websites and phone apps. Other reports suggest it would just be a feature to boost Spotlight on iOS devices. So let’s reason it out together, digesting and analyzing the latest developments. Because as Steve Jobs rightly said —
https://medium.com/discourse/apple-might-make-a-big-jump-c5625634cfcd
['Swati Suman']
2020-11-19 15:27:14.495000+00:00
['Technology', 'Innovation', 'World', 'Artificial Intelligence', 'Startup']
572
This is Magical: Rock Sugar 2-in-1 Power Bank and Wall Charger
Technology is something that surprises me every time. When you think you’ve seen it all — you just take a closer look and get a new dose of technological wonder. Around 17 years ago I was dreaming about a small gadget I could take anywhere to watch movies. Now I have that gadget in my pocket. Now I dream of it staying charged the whole day. But with this thing, that should not be an issue. The only thing I am mad about is that I was not the one who came up with this idea. Just imagine combining the two most obvious gadgets: a power bank and a wall charger. Sounds pretty logical, right? Yep, it definitely does, but I can’t see myself selling it to the people in need. Let’s be honest: one of the biggest issues of modern gadgets happens to be their battery. No matter the capacity, they lose juice pretty fast. And so, half of the day we enjoy touchscreening and the rest of the day we search for a spot to charge our smartphones. We carry power banks to be mobile and able to charge our gadgets on the go, plus we carry our wall chargers in order to charge our power banks and our gadgets when the power banks are empty. Putting a reasonable power bank inside a wall charger the size of a MacBook charger is genius. After all, sooner or later, you will have to have both of them charged. Now, combining one with the other, you do not have to worry about charging two separate gadgets. Just plug one battery box into any socket, plug your smartphone into that device, and get them both juiced up simultaneously. Luckily, there is a satisfactory number of those available for purchase — you just need to pick one. Though the one that I really liked was the Rock Sugar 2-in-1 Power Bank. It is more than affordable (around $30), has 2 USB Type-A ports and 1 Type-C port as well as an LED indicator and, of course, a wall plug to get some juice into it. It packs a decent battery of 5,000 mAh, which is not the best offer ever but should definitely be enough to keep you going through the day. After all, you will always be able to charge the Rock Sugar when necessary. Moreover, you can use it as a travel charger since it has a foldable and interchangeable socket plug. Something one can only dream about. I do not know about you — but I am definitely buying that thing and want to give it a try. Now, with the amount of power banks in my house, I can last without electricity for a month at least. Any ideas about this Rock Sugar 2-in-1 Power Bank? Will it be useful for you? Let me know in the comments!
https://medium.com/yolofreelance/this-is-magical-rock-sugar-2-in-1-power-bank-and-wall-charger-564b3ce4da6b
[]
2019-03-13 04:57:44.928000+00:00
['Charger', 'Smartphones', 'Yolofreelance', 'Slanzer Technology', 'Enhancing']
573
I’m spending Christmas morning with the Apple helpline guy.
I’m spending Christmas morning with the Apple helpline guy. After writing about my daughter’s iPhone addiction, I got blasted by tech lovers: I shouldn’t belittle my daughter’s Apple love; I should share in her enthusiasm and steer her toward more creative tech pursuits. So when her grandparents were at a loss over what to buy their grandchild for Christmas, I — at a loss myself — suggested an iPad and an Apple Pencil. I asked my husband’s goddaughter (a grad student in video game design) for more advice. She recommended art app Procreate. But tech is never simple, at least not for luddites like me. I set up the device, but couldn’t buy the app. Something went wrong. So here I sit, alone, on hold, as the iPad updates, and the support service person has gone eerily silent. Ho Ho Ho!
https://medium.com/modern-parent/im-spending-christmas-morning-with-a-guy-on-the-apple-helpline-5baa30c85555
['Stephanie Gruner Buckley']
2020-12-30 05:01:32.423000+00:00
['iPhone', 'Christmas', 'Technology', 'Addiction', 'Customer Service']
574
Build Slack apps in a flash
Build Slack apps in a flash Introducing the newest member of the Bolt family Illustration and design by Casey Labatt Simon Last April we released Bolt, a JavaScript framework that offers a standardized, high-level interface to simplify and speed up the development of Slack apps. Since then, we’ve seen a remarkable community of developers build with and contribute to Bolt, signaling an appetite for frameworks in other programming languages. Since its initial release in JavaScript, Bolt is also available in Java — and today, in Python. Interested in seeing our latest addition in action? We’re hosting a webinar about building with Bolt for Python later this week. Designing Bolt for simple, custom building In the months leading up to the release of Bolt for JavaScript, our small development team held weekly white-boarding sessions (developing a recursive middleware processor was not as easy as we expected). We pushed hundreds of commits, took countless coffee breaks, and followed the guidance of JavaScript community principles. Developing Bolt for Java and Python, we knew we needed to customize them to best fit each unique language community. As we trekked, we made small modifications to the different frameworks—in Java we modified how we pass in listener arguments, and in Python we adapted Bolt to work with existing web frameworks, like Flask. Our specialized approach was complementary to Bolt’s core design principles. A common listener pattern A common listener pattern simplifies building with all the different platform features Bolt is built around a set of listener methods. These are used to listen and interact with different events coming from Slack. For example, Events API events use the events() listener, and shortcut invocations use the shortcut() listener. All listeners use common parameters that allow you to define unique identifiers, add middleware, and access the body of incoming events. A handful of built-in defaults Built-in OAuth support makes multi-team installation faster and more intuitive Bolt includes a collection of defaults that perform the heavier lifting of building Slack apps. One of these is a design pattern called receivers, or adapters in Python. These separate the concerns of your app’s server and the Bolt framework so updates for server logic doesn’t require framework updates, and vice-versa. Your app has access to a built-in Web API client that includes features such as built-in retry logic, rate limit handling, and pagination support as you make calls to any of our 130+ methods. It offers a simple way to call Web API methods without having to think of all of the possible edge cases. And lastly, Bolt offers OAuth support which handles the “Add to Slack” flow, making token storage and access for multi-team installations simpler. Helper functions and objects The say() helper is available in all listeners that have a conversation context To complete common tasks, Bolt includes a set of helper functions. For example, in any listener with an associated conversation context, there will be say() function that lets your app send a message back into that channel. And for events that need to be acknowledged within 3 seconds, Bolt surfaces an ack() function that streamlines the act of responding. Bolt also offers helpers that make it easier to inspect and pass data through your app. 
Rather than having to unwrap incoming events to access the most important information, Bolt includes a payload object that is a predictable, unwrapped event (though you’ll still have body for the more verbose event). You can also access context , which is a key/value dictionary that allows you to pass data through middleware and listeners. For example, if you have an internal data store that you want to associate with incoming events, you can create a global middleware function to store that information in context , which will be natively accessible in listeners. The future of Bolt As the platform grows, we will continue our investment in Bolt to make it easier, faster, and more intuitive to build Slack apps. For example, steps from apps are now available in Workflow Builder. Each workflow step has a few associated events, so we collaborated with the Workflow Builder engineering team to design a common pattern in Bolt that lets you centrally handle the entire life cycle of a workflow step. Also, we recently pre-announced Socket Mode, which will improve the experience of deploying apps behind a firewall. When generally available early next year, Bolt apps will gain support for this feature with minimal code changes. We’re also unlocking more Bolt resources for custom use cases— whether that’s specialized hosting environments, simplifying new features, or building scalable apps for enterprise grid and Slack Connect. We’ll continue to expand our collection of Bolt-focused code samples, tutorials, deployment guides, and webinars; and if you need a more specialized approach to building Slack apps, we have ongoing plans for our lower-level SDKs that power Bolt under the hood. Digging into the nuts and bolts You can start building with Bolt using our guides in Python, JavaScript, and Java. If you’re a JavaScript developer, you can read our new hosting guides to get your app up-and-running on Heroku with an equivalent for AWS Lambda coming soon. Want to dive deeper into Bolt for Python? We’re hosting a webinar on November 11, which you can register for today.
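To give a feel for how these pieces fit together, here is a minimal sketch of a Bolt for Python app that wires up an event listener, a shortcut listener, and a global middleware, following the patterns described above. The callback ID, the in-memory data store, and the message text are placeholders; adapt them to your own app and check the Bolt for Python docs for the full setup:

import os
from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

# Global middleware: attach extra data to context before any listener runs.
@app.middleware
def add_user_store(context, next):
    context["user_store"] = {"W123": "some internal record"}  # hypothetical data store
    return next()

# Events API listener: say() is available because a conversation context exists.
@app.event("app_mention")
def handle_mention(event, say, context):
    say(f"Hi <@{event['user']}>! I see {len(context['user_store'])} record(s) in the store.")

# Shortcut listener: ack() within 3 seconds, then work with the unwrapped payload.
@app.shortcut("open_report")  # hypothetical callback ID
def handle_shortcut(ack, payload, logger):
    ack()
    logger.info(payload)

if __name__ == "__main__":
    app.start(port=3000)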
https://medium.com/slack-developer-blog/build-slack-apps-in-a-flash-700570619065
['Shay Dewael']
2020-11-09 19:13:15.144000+00:00
['API', 'Technology', 'Python', 'Programming', 'Slack']
575
Mangools SiteProfiler Complete Review in 2021
Mangools SiteProfiler Complete Review in 2021 SiteProfiler is a website analysis tool that offers valuable information about a website’s traffic, its inbound links or backlinks, its competitors, and much more important insights in any organic SEO strategy. This tool is part of the SEO Mangools suite of tools, a platform designed in 2014 to be a powerful, simple and inexpensive alternative to the rest of the SEO suite on the market. Mangools SiteProfiler Complete Review in 2021 What to Expect After Buying SiteProfile? SiteProfiler has a good range of functionalities, although we will highlight some of its key features below. First, the tool allows you to check the SEO strength, popularity and authority of any website and compare it with the competition. Also, the tool can estimate website traffic and the sources it comes from. Another interesting feature of SiteProfiler is that it makes it easier to identify content that generates traffic, as well as to verify the profile of inbound links. You can check if a website is healthy or likely to be considered spam. Lastly, just like other popular platforms like SimilarWeb, SiteProfiler allows you to find competitors or similar websites, to perform a complete analysis and benchmarking. If you want to know more about the functionalities of SiteProfiler, keep reading to know everything about this tool and what has made more than 25,000 users trust Mangools to improve the positioning and visibility of their websites on the internet. Can I just subscribe to SiteProfiler? No, it is not possible. SiteProfiler is part of the SEO Mangools suite of tools, so it is not possible to subscribe to it separately. All the tools in this powerful SEO suite are included in all subscription plans. The tools work with each other and are interconnected. That is how you can get the most out of it. Is there a demo or trial version of SiteProfiler? Of course. Mangools offers a trial to test their entire SEO suite, which includes the SiteProfiler tool. Just by registering on the platform and without adding your debit or credit card, you will have access to a free 10-day trial of Mangools, and all its tools included: SitePRofiler, SERPWatcher, KWFinder, SERPChecker and LinkMiner. Does SiteProfiler offer a money-back policy? Yeah sure! SiteProfiler, like all tools in the Mangools suite, offers a 48-hour money back policy. If for whatever reason, you subscribe to the platform and it does not meet your expectations, you will be entitled to a refund of the money within the first 48 hours. To receive it, it is as simple as contacting the support at [email protected] or in their live chat. Final Thoughts on SiteProfiler SiteProfiler is an SEO analysis tool, which offers all the different metrics grouped on the same site so that you can take a quick look at them. It allows you to analyze the level of authority and trust of any website (it can be yours or your competition’s), thanks to SEO metrics from Moz, Majestic, Alexa Rank and the number of times it has been shared on Facebook. Another function of SiteProfiler is to analyze the link profile of a website. That is, it is capable of detecting if the backlinks of a website are malicious, and can harm your positioning so that you take the appropriate actions. Besides offering you the possibility of finding new competitors similar to those on your website, it also allows you to detect which content from those competitors attracts the most traffic to their websites. So you can get ideas for your project.
https://medium.com/@pchojecki/mangools-siteprofiler-complete-review-in-2021-83cb2aa5f814
['Przemek Chojecki']
2020-12-21 10:42:37.604000+00:00
['Website', 'Technology', 'Marketing', 'SEO', 'Mangools']
576
[S1;Ep8] Secrets of the Zoo: North Carolina (2020) Episode 8 “Running Otter Time”
Secrets of the Zoo: North Carolina — Season 1, Episode 8. North Carolina is home to the world’s largest zoo, with 2,600 acres, large natural habitats and more than 1,800 animals in its care. The show features several Zoo staff, including keepers and veterinarians, and highlights stories including routine animal husbandry, emergency procedures and the Zoo’s work in conservation and in the rescue and release of injured wildlife. A Target Package is short for Target Package of Information. It is a more specialized case of an Intel Package of Information, or Intel Package. THE STORY: Jeremy Camp (K.J. Apa) is an aspiring musician who wants only to honor his God through the power of music. Leaving his Indiana home for the warmer climate of California and a university education, Jeremy soon comes across Melissa Henning (Britt Robertson), a fellow university student he notices in the audience at a local concert. Falling for Cupid’s arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from a budding relationship, fearing it will create an awkward situation between Jeremy and their mutual friend Jean-Luc (Nathan Parsons), a fellow musician who also has feelings for Melissa. Still, Jeremy is relentless in his pursuit until they eventually end up in a loving relationship. Their youthful courtship comes to a halt when the life-threatening news of Melissa’s cancer takes center stage.
The diagnosis does nothing to deter Jeremy’s love for her, and the couple eventually marries shortly thereafter. However, they soon find themselves walking a fine line between a life together and her suffering from the illness, with Jeremy questioning his faith in music, in himself, and in God. STREAMING MEDIA: Streaming media is multimedia that is constantly received by and presented to an end user while being delivered by a provider. The verb “to stream” refers to the process of delivering or obtaining media this way. Streaming identifies the delivery method of the medium, rather than the medium itself. Distinguishing the delivery method from the media distributed applies especially to telecommunications networks, as almost all of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content, and users lacking compatible hardware or software may be unable to stream certain content. Streaming is an alternative to file downloading, in which the end user obtains the entire file before watching or listening to it. Through streaming, an end user can start playing digital video or audio content before the complete file has been transmitted. The term “streaming media” can also apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are considered “streaming text”. This brings me around to discussing I Still Believe, a Christian faith-based film release. As is almost customary, Hollywood usually generates two (maybe three) films of this variety in its yearly theatrical release lineup, with the releases usually landing around spring and/or fall. I didn’t hear much when this movie was initially announced (it probably got buried underneath all of the popular movie news on the newsfeed). My first actual glimpse of the movie was when the film’s trailer premiered, which looked somewhat interesting to me. Yes, it looked like the movie was going to have the typical “faith-based” vibe, but it was going to be directed by the Erwin Brothers, who directed I Can Only Imagine (a film that I did like). Plus, the trailer for I Still Believe premiered for quite some time, so I kept seeing it whenever I visited my local cinema. You could say it was a bit “ingrained in my brain”. Thus, I was a little keen on seeing it. Fortunately, I was able to see it before the COVID-19 outbreak closed the movie theaters down (I saw it on its opening night), but, because of work scheduling, I hadn’t had the time to do my review for it… until now. And what did I think of it? Well, it was pretty “meh”. While its heart is certainly in the proper place and quite sincere, the film is a little too preachy and unbalanced in its narrative execution and character development. The religious message is plainly there, but it takes way too many detours and doesn’t focus on certain aspects, which weighs down the feature’s presentation.
TELEVISION SHOWS AND HISTORY: A television show (often simply TV show) is any content produced for broadcast via over-the-air, satellite, cable, or the internet and typically viewed on a television set, excluding breaking news, advertisements, or trailers that are usually placed between shows. Television shows are most often scheduled well ahead of time and appear on electronic guides or other TV listings. A television show may also be called a television program (British English: programme), especially if it lacks a narrative structure. A television series is usually released in episodes that follow a narrative and are usually split into seasons (US and Canada) or series (UK) — yearly or semiannual sets of new episodes. A show with a limited number of episodes may be called a miniseries, serial, or limited series. A one-time show may be called a “special”. A television film (“made-for-TV movie” or “television movie”) is a film that is initially broadcast on television rather than released in theaters or direct-to-video. Television shows may be viewed as they are broadcast in real time (live), be recorded on home video or a digital video recorder for later viewing, or be viewed on demand via a set-top box or streamed over the internet. The first television shows were experimental, sporadic broadcasts viewable only within an extremely short range from the broadcast tower. Televised events such as the 1936 Summer Olympics in Germany, the 1937 coronation of King George VI in the UK, and David Sarnoff’s famous introduction at the 1939 New York World’s Fair spurred a rise in the medium, but World War II put a halt to development until after the war. The 1947 World Series inspired many Americans to buy their first television set, and in 1948 the popular radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name “Mr Television” and demonstrating that the medium was a stable, modern form of entertainment which could attract advertisers. The first national live television broadcast in the US took place on September 4, 1951, when President Harry Truman’s speech at the Japanese Peace Treaty Conference in San Francisco was transmitted over AT&T’s transcontinental cable and microwave radio relay system to broadcast stations in local markets.
FINAL THOUGHTS: The power of faith, love, and affinity take center stage in Jeremy Camp’s life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life and times of Jeremy Camp, pinpointing his early life and his relationship with Melissa Henning as they battle hardships and hold on to their enduring love for one another through difficulty. While the movie’s intent and thematic message of a person’s faith through trouble are indeed palpable, as are the likeable musical performances, the film certainly struggles to find a cinematic footing in its execution, including a sluggish pace, fragmented pieces, predictable plot beats, too many preachy / cheesy dialogue moments, overused religious overtones, and mismanagement of many of its secondary / supporting characters. If you ask me, this movie was somewhere between okay and “meh”. It was definitely a Christian faith-based movie endeavor (from start to finish) and definitely had its moments, but it failed to resonate with me, struggling to locate a proper balance in its undertaking. Personally, regardless of the story, it could’ve been better. My recommendation for this movie is an “iffy choice” at best, as some will like it (nothing wrong with that), while others will not and dismiss it altogether. Whatever your stance on religious faith-based flicks, the film stands as more of a cautionary tale of sorts, demonstrating how a poignant and heartfelt story of real-life drama can be problematic when translated into a cinematic endeavor. For me personally, I believe in Jeremy Camp’s story / message, but not so much the feature. FIND US: https://www.ontvsflix.com/tv/112783-1-8/secrets-of-the-zoo-north-carolina.html
https://medium.com/@getsho-rt-y-s-3/s1-ep8-secrets-of-the-zoo-north-carolina-2020-episode-8-running-otter-time-5fd1e65b825c
['Getsho Rt Y S']
2020-12-19 11:42:57.060000+00:00
['Politics', 'Coronavirus', 'Documentary', 'Technology']
577
(Free Global Summit) Implementing “Tech-Work and Open Data” for Everyone.
The idea was to create a synergy between Heritage, Sustainability, Security, Healthcare and Novel Technologies. The Global Summit involves a number of proposals from Italy and around the world and puts the academic and industry’s current issues under the spotlight, specifically on issues related to the research of innovative techniques and technologies for the implementation of “Sustainability, Safety, Security and Healthcare”. The Summit also focuses on methodologies and practices in modeling, performance evaluation and optimization of novel approaches and systems. It brings together researchers from different communities including Computer Science, Networks, Electronics, Operations Research, Optimization, AI or Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI), Control Theory, Geomatics, Internet of Things, Open Source, Manufacturing, NanoTechnologies, Cyber Security, Blockchain applications, Smart Cities, Health Care, Environment, Societal Challenges, Digital Humanities, Security, IT Infrastructure, Manufacturing, Business, etc. Workshops are part of the Global Summit, international events attracting educational institutions and companies and creating a unique opportunity to bring together the academic word with industry, other Educational Institutions, etc. The objectives are also the strengthening of novel knowledge of humanitarian assistance and the development of novel national and international humanitarian strategies. The city of Catania will, therefore, be the international heart of Technological Innovation assets as well as a forum for meeting and discussing for experts, students, and enthusiasts from Italy and around the world. The Summit will be a significant opportunity for exchange between researchers, companies, students, etc. for the promotion of novel multidisciplinary approaches and researches. If you’re passionate about using your skills to tackle real-world problems and want the chance to share ideas, expand your horizons and connect with industry practitioners, professors, students, experts, companies, early career professionals, etc. then make haste, submit an abstract and join us! For further Information, please contact Agata Lo Tauro via LinkedIn https://it.linkedin.com › agata-lo-tauro-6723257b Call for abstracts/papers Scope TECH_WORK 2020 is the 1st edition of the Global Summit on interdisciplinary approaches and researches for Novel Unitarian Researches and Approaches. The Global Summit focuses on a broad range of research challenges in the field of novel technologies. The symposium is also dedicated to fostering interdisciplinary collaborative research in these areas and across a wide spectrum of application domains. This year’s edition will provide an exciting forum for interaction among academic, students and industry researchers together with practitioners from both the simulation community and the different user communities. The Summit will be held in Catania and other cities in the world. Pre-recorded video and/or audio or remote presentation over the Internet will be also guaranteed. The Symposium invites submissions on the topics below in all application areas of novel technologies. Submissions in interdisciplinary areas are especially encouraged (e.g. 
Computer Science, Networks, Electronics, Operations Research, Optimization, AI or Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI), Control Theory, Geomatics, Internet of Things, Open Source, Manufacturing, NanoTechnologies, Cyber Security, Blockchain applications, Smart Cities, Health Care, Environment, Societal Challenges, Digital Humanities, Security, IT Infrastructure, e-Business, etc.). The conference also invites abstracts, papers, posters, technical demonstrations, and innovative apps that report on recent results, ongoing research, or industrial case studies in the area of analysis.
https://medium.com/@judit.camacho.diaz/free-global-summit-implementing-tech-work-and-open-data-for-everyone-3f2f74862312
['Judit Camacho Diaz']
2020-07-08 17:34:53.682000+00:00
['Summit', 'Open Data', 'New Tech Tools', 'Healthcare Technology']
578
How Tech Bootcamps Are Supporting the Enterprise World
How Tech Bootcamps Are Supporting the Enterprise World Tech schools may have the key success factor for the urgent need for updating the workforce with technology skills Image by vectorjuice available in freepik In 2020, tech bootcamps are not a new topic anymore. They have been around since at least 2012, offering courses in diverse subjects (typically, web development, mobile development, UX/UI design, data science, and project management) and flooding a digital-starved market with new graduates each year. Last year, something changed. The number of corporate training bootcamp graduates has surpassed the number of classic bootcamp graduates. This reveals a new trend, not only in the bootcamp market itself but also in enterprise operations. Companies are acknowledging the benefits of intensive, accelerated learning programs to qualify their workforce for digital roles. The advantages of bootcamps for companies Workforce Reskilling Reskilling: learning new skills for a different job function Digital Transformation is shaping industry after industry. Companies are resorting to technology to improve their business operations, to launch new products and services, and to keep up with their competitors. This situation leads to high demand for specialized tech workers, that it is expected to keep growing for the next years. Currently, even non-technical roles require the use of mobile phones, computers, and the Internet on a daily basis. Therefore, companies face themselves with a large number of long-time professionals with roles that are (or are about to become) obsolete, at the same they have high need for tech specialists who are hard to find and retain in a very competitive market. Since firing long-time workers and hiring tech workers are both expensive processes, some companies are turning to a new solution: providing their workers with the necessary knowledge to take on new roles. Workforce Upskilling Upskilling: learning new skills within the same job function At 13.2%, the technology industry has the highest turnover of any industry. This means enterprises not only face the challenge of hiring new skilled workers, but they also face difficulties in retaining them. Although high-demand and rising compensation seem to be the main reasons for tech turnover, 94% of employees state they would stay longer if the organization was willing to invest in their learning and development. Training programs to turn entry-to-mid-level team members into experts in a specific technology is especially common practice in consulting companies because these firms focus on delivering specific projects for their clients. New Hires The tech sector demands a workforce with very specific skills that change quickly. College programs have trouble following the demands of such a fast-paced market. Students often graduate with general knowledge, rarely matching the specific skill set and expertise level required for a tech role. On the other hand, entry-level developers are cheaper to hire than experts. Therefore, many companies end up hiring professionals with low expertise levels and submit them to training programs in which they acquire the necessary skills, knowledge, and behaviors to quickly become effective contributors to the organization. Image by startup-stock-photos on Pexels Bootcamps vs old training formats Training programs come in several formats: online courses, training on the job, tuition reimbursement, etc. All of these training formats have pros and cons. 
Bootcamps appear as a valid alternative, offering the following advantages: A big number of employees receive the same training at the same time. The bootcamp assures the teams will receive cohesive knowledge and may also work as a team-building event. Unlike self-paced online courses, bootcamps have planned classes, so the company knows how their employees’ days are structured. Also, we must note that online courses without real-time communication with an instructor may cause some confusion on the part of the employees if a particular point isn’t explained to their satisfaction. Bootcamps have a teaching team exclusively dedicated to helping students overcome any difficulties they may encounter. — a lot of learning to code involves trial & error and spending time bug-fixing — you can get stuck on problems for a long time which can be frustrating and stops people from learning on their own One of the downsides of the bootcamp used to be it required on-site classes, either in the bootcamp cohort or in the company, but after the 2020 pandemic, methods to provide remote classes have been adopted by many bootcamp organizers and are here to stay. Are bootcamps ready to meet the challenge? Bootcamp organizers have identified companies’ need of providing specialized training for their employees. For some time now, many tech schools have offered customizable corporate training programs and many enterprises already trust them. As of December 2020, Course Report maintains a list of more than 50 tech schools that already provide corporate B2B bootcamps all over the world. Are companies willing to accept bootcamps as a solution? Companies already have acknowledged the rapid pace of business and technology advance is something they have to deal with. They are invested in understanding what skills will be most in-demand in upcoming years. Reskilling employees requires investment. However, the alternative to letting go of long-term employees and finding new workers with in-demand skills may not be cheaper or easier. Therefore, reskilling is an attractive option and many business leaders are leveraging training programs for filling key roles. We know bootcamps are already considered among old training formats: last year, more than 22,000 workers have acquired digital skills via corporate bootcamps. What to expect in the future? It is impossible for a bootcamp to replace long-term education offered by college degrees. However, universities have acknowledged the advantages of accelerated education programs and have started to launch their own bootcamps. In some cases, these programs are created through partnerships with established bootcamp schools but adapted to the college’s educational purposes and student needs. By participating, students develop projects to add to their portfolios and get an additional learning experience in their curriculum. Companies also have been taken the initiative of creating their own programs. Google, for example, offers its own digital marketing bootcamp and also partnered with General Assembly school to create an Android intensive course. By building the curriculum and contents, companies make sure the graduates of these courses have acquired the knowledge they are looking for. Summing up, bootcamp’s quick and intensive education method earned its early reputation through a business-to-consumer model, but, in the most recent years, it's gaining relevant traction through business-to-business solutions. 
Companies, and even universities, already took the step of trusting in this type of education to prepare the current and next generation of workforces for the fast-paced digital World challenges. Thanks to Ricardo Silva Related Articles: Mariana Vargas is a UX Engineer based in Lisbon, Portugal. In 2019, she was part of the teaching team in the first Ironhack’s B2B Bootcamp: a training program for MediaMarkt in Ingolstadt, Germany. This Bootcamp re-skilled more than 20 white-collar workers to assume developer positions. Gain Access to Expert View — Subscribe to DDI Intel
https://medium.com/datadriveninvestor/how-tech-bootcamps-are-supporting-the-enterprise-world-cb5fa076442
['Mariana Vargas']
2020-12-25 09:08:51.845000+00:00
['Programming', 'Technology', 'JavaScript', 'Business', 'Software Engineering']
579
The One Year Plan For Cracking Coding Interviews
The One Year Plan For Cracking Coding Interviews About my hustle before cracking interviews. It took me one year to go from a noob programmer to someone decent enough to crack coding interviews for getting internships and gaining experience. I still have a long way to go, but the first step to being a good programmer is working in the real world and getting experience, which can be best gained by internships. And if you want an internship, you have to crack the interview first. Which brings us to this blog. Photo by Jordan Whitfield on Unsplash I have broken down my one-year plan, which I diligently followed, and will hopefully help you with your planning if you are in the starting stage. Prerequisite: Knowing the basics and syntax of one programming language. Most students tend to know Java, C, or Python from their colleges/highschools. You can stick to the one you are comfortable with from these three, but if C is your preferred language, I would recommend you to switch to C++. My first language was C, which made me switch to C++. I learned Java on the side, enjoyed it more, and decided to practice competitive coding in Java, and so every interview I have ever cracked was by using Java. I had zero experience in python, but after joining Facebook, all of the code I have written as an intern is in Python. So my point is, there is no superior language amongst these three, try not to worry about which one to choose. Just pick one, crack interviews in that one, and you can learn the rest on the go depending on where you get placed. Here’s the plan: The month-specific blogs that are released so far have been linked below, and the rest are coming soon. Month 1: Big O, Arrays and Strings: Read it here Month 2: Linked Lists: Read it here Month 3: Stacks and Queues: Read it here Month 4: Trees and Tries: Read it here Month 5: Hashmap, Dictionary, HashSet Month 5: Graphs Month 6: Recursion and Dynamic Programming Month 7: Sorting and Searching Month 8: Reading(about system design, scalability, PM questions, OS, threads, locks, security basics, garbage collection, etc. basically expanding your knowledge in whatever field required, depending on your target role) Month 9, 10, 11, 12: A mix of medium and hard questions in your preferred website. Practice by participating in contests, focusing on topics that you are weak at, mock interviews, etc. Source — forbes.com Here’s how I approach every topic in each month — Let’s say you are in month 4, and focusing on trees. You need to first understand what trees are, different types of trees, and be able to define class Node and Tree. You then need to be able to perform basic operations like adding, finding, and deleting an element, pre-order, in-order, post-order, and level-by-level traversal. Lastly, you practice different tree questions available on Hackerrank, Leetcode, or a website of your choice. You should target the easy questions first, and once you are comfortable, move on to medium and hard. The last 4 months are for solving a mix of different questions, via contests or otherwise, which is necessary because when you are practicing tree questions, you know you have to use a tree. But if you are given a random question, how will you know a tree would be the best approach? Also, always look for the most optimal solution in forums after solving it yourself. 
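To make that month-4 baseline concrete, here is a minimal sketch, my own illustration rather than something from any particular site or course, of a Node and Tree class with insertion and an in-order traversal:

class Node:
    """A single node in a binary search tree."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

class Tree:
    def __init__(self):
        self.root = None

    def add(self, value):
        """Insert a value, walking left for smaller and right for larger values."""
        if self.root is None:
            self.root = Node(value)
            return
        current = self.root
        while True:
            if value < current.value:
                if current.left is None:
                    current.left = Node(value)
                    return
                current = current.left
            else:
                if current.right is None:
                    current.right = Node(value)
                    return
                current = current.right

    def in_order(self, node):
        """Yield values in sorted order: left subtree, node, right subtree."""
        if node is not None:
            yield from self.in_order(node.left)
            yield node.value
            yield from self.in_order(node.right)

tree = Tree()
for v in [8, 3, 10, 1, 6]:
    tree.add(v)
print(list(tree.in_order(tree.root)))  # [1, 3, 6, 8, 10]

Finding, deleting, the other traversal orders, and a level-by-level walk all build on these same two classes, and that is roughly the fluency to aim for before moving on to practice questions.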
You have an entire month, and if you manage to dedicate 40–70 hours a week, you’ll be able to master trees in such a way that if a tree question is thrown at you in an interview, you’ll be able to mostly solve it since you trained your mind to think that way with intense practice. If you are a student, dedicating this much time is definitely doable, even with side projects, homework, etc. Your grades might take a hit (my As became Bs in that one semester(month 9,10, 11, 12) when I was dedicating over 8 hours a day to competitive coding) but it was worth it. You should also try to build projects or do research on the side while preparing. Some people learn better by participating in contests in CodeForces, CodeChef, etc. while others prefer practicing questions. Again, there is no benefit of one over the other, do what you personally prefer. I do not believe in practicing particular topics for a particular company, some websites claim to have a set of questions dedicated to a particular company, eg: cracking the Google interview. I think the goal should be to be a better developer overall, focusing on just a few topics that Google tends to test candidates on may not be the best way to follow. Interviewers also judge you based on your LinkedIn, Resume, past experiences, courses taken, Github, degrees and certifications, projects, research papers, etc. Practicing competitive coding does not guarantee a job, but it does guarantee you’ll be able to crack technical interview rounds most of the time, and you’ll also be a better developer overall, which might help you when you build projects. Lastly, don’t stop. It may seem easy at first when you are motivated, but that fuel dies in a month or so. Keep your goal in mind, of course, it’s going to be hard, but the only ones who make it are those who stick to the plan. You can edit the plan if you need to, but once done, stick to it, even on your lazy days, even when you have a college fest or a party to attend, even when you are sleepy. Like I said, the ones who succeed are the ones who *stick to the plan*. This sums up my schedule at a high level. I plan on digging deep, and my next blog will only focus on month 1(Big O, Arrays and strings), the one after that will be month 2, and so on. I hope this was helpful, let me know if you want me to also write about any other topic on the side, or if you have any queries. I’d appreciate it if you could ask your questions on Instagram since I prefer to keep LinkedIn for professional opportunities, but either is fine. Thanks! Signing off! Anjali Viramgama Incoming Software Developer at Microsoft LinkedIn | Instagram
https://towardsdatascience.com/the-one-year-plan-for-competitive-coding-6af53f2f719c
['Anjali Viramgama']
2020-12-13 21:58:57.485000+00:00
['Competitive Programming', 'Google', 'Facebook', 'Coding', 'Technology']
580
I want my therapy dog to have a heartbeat
As much as I love the people, there is something about a warm, fluffy dog that leaves me fussing with fur on my clothes and putting his (or her) cold nose near me that warms my heart in a different way. Some women believe diamonds are a girl’s best friend. For ladies like me, get a refund on the jewelry; dogs will win me over every single time. Do robotic therapy dogs have the same appeal for kids and adults? In a recent post on Science Daily, robotic therapy dogs have grown more advanced and are a hit at a secondary elementary school. Clearly this isn’t the kind of robotic dog that you would see on “The Jetsons.” (‘Lectronimo was no match for Astro.) Today’s robotic dogs are warmer, friendlier and cuter than the four-legged hunk of metal seen in the early days. But I still cannot wrap my mind around paying $130 for a companion robotic pup when I can just pay for the vaccines and have a real pup to steal my yoga mat and snatch food off my kitchen counter. But for kids, this robotic dog was exciting nonetheless. “Despite the children reporting they significantly preferred the session with the living dog, overall enjoyment was high and they actually expressed more positive emotions following interaction with the robot,” according to the post. “The more the children attributed mental states and sentience to the dog and robot, the more they enjoyed the sessions.” While I definitely do see the perks of using interactive robotic animals as an alternative, especially for people with pet allergies, I’m not sold on this idea. Unless you live in a condo or apartment where dogs aren’t allowed, I would think that part of the therapeutic appeal of dog ownership or visits is having to learn the ways of a real dog. For dog lovers who don’t have the time, money or energy to take care of a breathing, lively pup, I get it. Therapy dogs pop up and disappear as needed. For the purpose of hospitals and school visits, I fully understand why it may be easier to just use a robotic therapy dog. But there are some parts of technology that I simply have no interest in seeing the advancements of. As an adult who was once a child who was completely unimpressed by puppies and a bit afraid of dogs, I could easily see why a robotic dog would be less trouble. But the responsibility factor of learning how to take care of a dog as a child all the way up to senior years remains undefeated. For me, that is all the therapy I need — be it an hour and a half with my mother or 22 years of my life. And judging from some of the stroke survivors in attendance, they’d rather see a dog with a heartbeat, too.
https://medium.com/doggone-world/i-want-my-therapy-dog-to-have-a-heartbeat-6595a3185e04
['Shamontiel L. Vaughn']
2020-12-14 22:10:52.266000+00:00
['Pets', 'Robots', 'Dogs', 'Technology', 'Therapy Dogs']
581
A New Planetary Perspective
We’re UP42 and we care about bringing knowledge about our planet into the hands of those that need it. That’s why we’ve spent the past year working on a way to make planetary data, or geospatial data easy to find, analyze, and use. Why? By going up, we can look down. Our planet’s palette of changing colors, textures, and shapes — tells a story of land, wind, water, ice, and air — as they can only be viewed from above. Earth holds the answers to many of its problems. Where can we find those answers? Geospatial data. There are many types of geospatial sources that are rich with knowledge from different perspectives. From constellations of satellites circling the planet in space, to drones soaring overhead, and IoT connected devices dotted across the globe — rich sources of geospatial data help us to understand how Earth was, is, and will be. However, accessing and using geospatial data has (until now) been complicated, expensive, and time-consuming. Algorithms were a closely guarded secret — making analyzing the data once you find it — another hurdle to jump over. With UP42, we’re changing things. UP42 opens up access to what previously was out of reach. We’re sharing the knowledge and tools to study the world, with the world. What is UP42? UP42 is a platform to access and analyze portions of the planet at scale. We bring together multiple geospatial data sources, alongside processing algorithms — empowering you to discover the planetary insights you need. Through UP42, you can find the kind of data that you need, select what you’d like to study and pinpoint your chosen corner of the world. Three simple steps, all in one place. We’re a marketplace too — bringing together partners that have the data and algorithms that you’re looking for. We believe that by bringing people together, we can bring information together, and create change together. From observing urban growth and land use to monitoring deforestation and ice thickness — we are committed to helping people access planetary insights that are tailored to their needs. Whether you’re a large business owner interested in analyzing volumes of satellite data at scale, or a researcher looking at smaller portions of the planet for a specific cause, we can help. What’s next? It’s been almost three months since our beta launch and with just one month until we go public, we’re launching this blog to share more about the wonders of geospatial insights. We’ll be shedding light on how making geospatial data easy to use sparks innovations in industries far and wide. We’ll be introducing the people behind the data and algorithms, sharing opinion pieces from our team, and discussing why geospatial insights are good for businesses, people, and the planet. In a time when the planet is undergoing rapid change, it’s crucial to not only see this change but to understand it too. We’re here to make that easy. UP42. A new planetary perspective.
https://medium.com/up42/a-new-planetary-perspective-e6b076e3238f
['Nikita Marwaha']
2019-07-24 18:52:49.982000+00:00
['Company', 'Technology', 'Environment', 'Satellite', 'Geospatial']
582
Mustard — From Monasteries to Mad Money
By Elevate on Unsplash I regularly drive down to one of adjacent towns near the west side of Madison whenever my wife and I are craving Chinese food. There’s a wonderful place run by a Chinese painter and restaurateur, and I’m always excited to chat with her about the latest and greatest in her artistic life. After I give her my order, I like to walk outside and look around the little plaza with all its cute shops and interesting cafes. But one thing always stands out to me on these take-out excursions. Just across the street is a mustard museum. The National Mustard Museum. Really. Mustard and the Red Sox The National Mustard Museum is the brain child of former Assistant Attorney General of Wisconsin, Barry Levenson. The inspiration for the museum came after a World Series loss by the Red Sox in ’86, which left Levenson in tears. Like many of us, he went on a grocery run to find solace. While pushing his cart aimlessly through the aisles, he saw it — his newfound inspiration — an endless wall full of mustard jars. He decided he needed a new hobby, and after purchasing several of the more obscure brands of mustards, he set out to collect all the mustards of the world. By Fancycrave at Unsplash By 1987, Levenson’s new hobby became so consuming that he once saw a new variety of mustard laying out on a leftover tray in the hotel hallway. He was on his way to argue a case in front of the Supreme Court, and didn’t have enough time to return to his room, so he stowed the unopened jar in his pants. He won the case and may be the only lawyer to have done so with a jar of mustard in his pocket. It was clear that he needed to make the career move to full-time mustard aficionado. Today, the museum features over 6,090 mustards from around the world with 5,624 jars of prepared mustard on display. On the first Saturday of August, the museum celebrates National Mustard Day every year with fundraising efforts and community activities. And in collaboration with the Madison Area Technical College, the museum also hosts the prestigious World-Wide Mustard Competition, where hundreds of mustards from around the world are entered and judged by chefs, food writers, and other food professionals in 16 categories. Three awards are given to the best of each category and a Grand Champion award is reserved for the best of the best. The competition is held annually as well and is currently in its 25th year.
https://bryanquocle.medium.com/mustard-from-monasteries-to-mad-money-19ce826debec
['Bryan Quoc Le']
2018-11-06 14:49:35.569000+00:00
['Business', 'Food', 'Science', 'Technology', 'History']
583
Technology, Psychedelics , Bitcoin and Culture.
Technology, Psychedelics, Bitcoin and Culture. Meditation: taking a step back and observing your mind, without judgement. Becoming aware of your mind. Can bring up repressed trauma. Therapeutic rewiring. Psychedelics: effectively enhanced mental activity, breaking down the concepts and cognition we have evolved and with which we understand and navigate the world. Destroying the old so that the new can emerge. Trying to ‘see’ your mind with a cognition that has not quite yet reached the necessary stage of evolution to do that. Fast, intense rewiring. Profound learning. Steep learning curves: a viscerally felt rewiring, making new connections between seemingly unrelated concepts, new meaning, learning. Technology: has profoundly impacted our psychological evolution since the first stone tools were used. Each instantiation of new tech has facilitated the ever-increasing complexity of our psychology and thus how we organize our societies, communities, civilizations. Bitcoin is the next iteration in the evolution of the technology of money. Money is a technology that facilitates cooperation between people. Understanding Bitcoin necessitates understanding money, thus facilitating awareness of behavior. Old concepts of money are deconstructed and re-assimilated, re-defining how value is denoted, even what value is (is value a specific kind of meaning?). Over time, the impact of technology on our behavior and thus culture can be compared to that of a psychedelic experience and steep learning curve: the destruction of old concepts for the emergence of new, and the connecting of seemingly unrelated domains and ideas, rewiring to incubate new meaning, inspire new behavior and insight. This is a steep learning curve (the rabbit hole). Each major new technology has resulted in disruption, then settlement, and new culture formed. Technological evolution describes an exponential growth curve. The time between settlement and the next new tech is shorter and shorter. Like a pulse of change on the timeline of human history. Each pulse is closer to the last and thus seems stronger. Much like a psychedelic trip. I guess you could call it the evolution of consciousness if we couch it in a global and (human) cultural context. This post was prompted by a query to a reply I posted on Twitter in response to a tweet from Tim Denning.
https://medium.com/@nishad.kala/technology-psychedelics-bitcoin-and-culture-5d8601f64e01
['Nishad Kala']
2021-09-05 11:59:42.914000+00:00
['Technology', 'Bitcoin', 'Philosophy', 'Evolution', 'Psychedelics']
584
Coding Tip: Try to Code Without If-statements
When I teach beginners to program and present them with code challenges, one of my favorite follow-up challenges is: Now solve the same problem without using if-statements (or ternary operators, or switch statements). You might ask why would that be helpful? Well, I think this challenge forces your brain to think differently and in some cases, the different solution might be better. There is nothing wrong with using if-statements, but avoiding them can sometimes make the code a bit more readable to humans. This is definitely not a general rule as sometimes avoiding if-statements will make the code a lot less readable. You be the judge. Avoiding if-statements is not just about readability. There is some science behind the concept. As the challenges below will show, not using if-statements gets you closer to the code-as-data concept, which opens the door for unique capabilities like modifying the code as it is being executed! In all cases, it is always fun to try and solve a coding challenge without the use of any conditionals. Here are some example challenges with their if-based solutions and if-less solutions. All solutions are written in JavaScript. Tell me which solutions do you think are more readable. Challenge #1: Count the odd integers in an array Let’s say we have an array of integers and we want to count how many of these integers are odd. Here is an example to test with: const arrayOfIntegers = [1, 4, 5, 9, 0, -1, 5]; Here is a solution using an if-statement: let counter = 0; arrayOfIntegers.forEach((integer) => { const remainder = Math.abs(integer % 2); if (remainder === 1) { counter++; } }); console.log(counter); Here is a solution without if-statements: let counter = 0; arrayOfIntegers.forEach((integer) => { const remainder = Math.abs(integer % 2); counter += remainder; }); console.log(counter); Note: the examples above use forEach and mutate the counter variable. It is cleaner and safer to use immutable methods instead. However, this article is not about that topic. Stay tuned for more coding tips articles that will cover immutability and functional programming. Also note, as pointed out by David Nagli, that the if-less solution would only work for integers while the if-based solution has the advantage of handling any number including decimal ones. In the if-less solution, we are taking advantage of the fact that the result of modulus 2 operation is always either 1 (for odd) or 0 (for even). This result is data. We just used that data directly. What about counting even integers? Can you think of a way to do that without using an if-statement? Challenge #2: The weekendOrWeekday function Write a function that takes a date object argument (like new Date() ) and returns the string “weekend” or “weekday”. Here is a solution using an if-statement: const weekendOrWeekday = (inputDate) => { const day = inputDate.getDay(); if (day === 0 || day === 6) { return 'weekend'; } return 'weekday'; // Or, for ternary fans: // return (day === 0 || day === 6) ? 'weekend' : 'weekday'; }; console.log(weekendOrWeekday(new Date())); Here is a solution without if-statements: const weekendOrWeekday = (inputDate) => { const day = inputDate.getDay(); return weekendOrWeekday.labels[day] || weekendOrWeekday.labels['default']; }; weekendOrWeekday.labels = { 0: 'weekend', 6: 'weekend', default: 'weekday' }; console.log(weekendOrWeekday(new Date())); Did you notice that the if-statement condition has some data in it? It tells us which days are weekends. 
What we did to avoid the if-statement is extract that data into an object and use that object directly. That also enables us to store that data on higher generic levels. Update: Harald Niesche pointed out correctly that the || trick above is technically a conditional thing. I’ve opted to use it so that I don’t repeat “weekday” five times. We can easily get rid of it if we put the whole 7 days in the labels object. Challenge #3: The doubler function (here be dragons) Write the doubler function which, based on the type of its input, would do the following: When the input is a number, it doubles it (i.e. 5 => 10, -10 => -20). When the input is a string, it repeats every letter (i.e. 'hello' => 'hheelloo'). When the input is a function, it calls it twice. When the input is an array, it calls itself on all elements of that array. When the input is an object, it calls itself on all the values of that object. Here is a solution using a switch-statement: const doubler = (input) => { switch (typeof input) { case 'number': return input + input; case 'string': return input .split('') .map((letter) => letter + letter) .join(''); case 'object': Object.keys(input) .map((key) => (input[key] = doubler(input[key]))); return input; case 'function': input(); input(); } }; console.log(doubler(-10)); console.log(doubler('hey')); console.log(doubler([5, 'hello'])); console.log(doubler({ a: 5, b: 'hello' })); console.log( doubler(function() { console.log('call-me'); }), ); Here is a solution without a switch-statement: const doubler = (input) => { return doubler.operationsByType[typeof input](input); }; doubler.operationsByType = { number: (input) => input + input, string: (input) => input .split('') .map((letter) => letter + letter) .join(''), function: (input) => { input(); input(); }, object: (input) => { Object.keys(input) .map((key) => (input[key] = doubler(input[key]))); return input; }, }; console.log(doubler(-10)); console.log(doubler('hey')); console.log(doubler([5, 'hello'])); console.log(doubler({ a: 5, b: 'hello' })); console.log( doubler(function() { console.log('call-me'); }), ); Again, notice how the data (which operations should be done for which input type) was all extracted out of the switch statement into an object. Then the object was used to pick the right operation and invoke it with the original input.
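One possible answer to the open question from Challenge #1 (counting the even integers without an if-statement), offered here as a minimal sketch rather than something taken from the original article: since Math.abs(integer % 2) is 1 for odd integers and 0 for even ones, adding 1 - remainder counts the evens.

const arrayOfIntegers = [1, 4, 5, 9, 0, -1, 5];

let evenCounter = 0;
arrayOfIntegers.forEach((integer) => {
  // Math.abs(integer % 2) is 1 for odd integers and 0 for even ones,
  // so 1 - remainder adds 1 for each even integer and 0 for each odd one.
  const remainder = Math.abs(integer % 2);
  evenCounter += 1 - remainder;
});

console.log(evenCounter); // 2 (the evens here are 4 and 0)

As with the article's odd-counting version, this sketch only works for integer inputs.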
https://medium.com/edge-coders/coding-tip-try-to-code-without-if-statements-d06799eed231
['Samer Buna']
2019-02-25 19:27:23.432000+00:00
['Tips', 'JavaScript', 'Coding', 'Functional Programming', 'Technology']
585
3 Diet Strategies for the Office
3 Diet Strategies for the Office The weather is becoming warmer in San Francisco, and it’s the time of year when everyone thinks a little more about their diet! I wanted to share with you 3 honest and practical diet strategies for the office to help you be more aware of your diet. I found these diet strategies useful when I worked in public & private accounting, and I’m confident that you’ll find these resources useful. 1. Prepare your meals In my experience, most diets fail due to either a lack of motivation, not being disciplined enough to sustain them, or having zero structure. That’s why prepping your meals before work is a great way to combat the urge to eat out during lunch. You can either prep your meals for the week or every few days; whatever is most convenient for you. To maintain food freshness, I always prep my meals a day or two in advance. By doing this, you will know exactly what you’re eating and what to expect. It prevents you from splurging on cupcakes that a coworker brings to the office because you have now invested time into preparing your meals. In addition to prepping your meals, it’s helpful to have snacks planned as well! In your office, you’ll probably have a kitchen stocked with goodies and snacks. I can imagine the temptation to snack on something every time you walk to the kitchen to refill your coffee. I am guilty of this! After filling my coffee, I would pour myself a serving (OK, a little more than a serving) of Chex Mix. It happens to everyone. However, there are ways to combat these tendencies. On your next trip to the grocery store, be on the lookout for individually packed snacks. These are the type of snacks you will WANT to bring to the office and keep stored in your desk because the individual portion sizes are already measured out for you. “prepping your meals before work is a great way to combat the urge to eat out during lunch.” 2. The Mental Preparation Ask yourself: Are you fully committed? When a coworker offers you a homemade chocolate caramel cupcake with chocolate icing, it’s a little uncomfortable to just say “no thank you”. There isn’t an easy answer for this. At some point you just have to speak up and say no. It’s a scary step at first, but the more you say no to foods that don’t suit your diet, the more your coworkers will start to realize that you’re pretty serious. I was totally that girl in the restaurant who would order a giant salad with dressing on the side, while my coworkers enjoyed any dish they pleased, with appetizers. There were some times that were pretty difficult to sit through, but if you take ownership of your goals, being surrounded by food choices that don’t fit your diet does become easier. 3. Tracking Macro-nutrients This could be an entire article of its own, but I wanted to briefly talk about the importance of tracking macro-nutrients. At the end of the day, your body composition is dependent on your diet’s breakdown of calories and macro-nutrients. Everyone has different caloric/macro needs, so it’s best if you consult with a fitness coach to discuss their recommendations. Before every meal, I track my food using MyFitnessPal, a free online calorie counter diary. The food is weighed out using a food scale to ensure accuracy. After completing my daily food entry, it provides me with a summary of carbs, fats, and protein consumed. I don’t recommend you follow any diet you find online exactly. You can follow a similar process, but macros are dependent on the individual’s needs.
There are three primary macro-nutrients that food is made up of: Carbohydrates = 4 Calories per gram Protein = 4 Calories per gram Fats = 9 Calories per gram * In case you’re wondering, alcohol = 7 Calories per gram For those who are new to tracking macro-nutrients, it’s important to actually measure out your food to see how much you are consuming in a day. This may be an eye-opener for you because you’ll be surprised how the numbers add up, but it’s helpful information to know. You can start off by purchasing an electronic food scale from Target, Walmart, or Amazon. One of my favorite things about tracking macro-nutrients is that it allows me to fit in “cheat” foods as part of my regular diet. For example, if I have hit my required fats and proteins for the day, and I still have some room for some extra carbs in my calorie budget, I’ll gladly pour myself a bowl of my favorite cereal. The key here is PORTION CONTROL. “A sample of my macronutrient breakdown for a day. Remember, I am a 126lb 5'2 female, so this is specific to my current needs.” “🔔 If you have any questions, feel free to reach out for a free 60-minute consultation at [email protected] or reach out to us on https://strongstyleathletics.com/contact-us-2/" Written by: Rachel Bitz US Accounting Partner
https://medium.com/accodex-partners/3-diet-strategies-for-the-office-f49ac7bc3d45
['Accodex Partners']
2019-06-17 05:54:14.150000+00:00
['Accounting', 'Compliance', 'Weight Loss', 'Technology', 'Bookkeeping']
586
The Process to Deliver a Software or Web Application Into Production
I have worked for different companies which allowed me to experience different strategies when it comes to take an idea and release it to the public as a final product. Whether you are part of a small or big project, I believe any developer would benefit a lot from understanding the entire process of what it takes to deliver software or application. Analysis The first thing that triggers a product or software development is the idea or a solution for a specific problem. After that, you normally look at the market for existing solutions if any. If you find one you take a closer look at them to see if they lack something you can contribute to or if you can just go ahead and build something better to compete in the market. I call this part “research the idea” and in a software development cycle it is called analysis and it is the step where you define the scope and the project itself. You measure the risks, define a timeline that will take you to the final result, define or anticipate issues or opportunities, and plan for things as well as come up with the requirements for the project. This step may even determine if the project should go forward or not as well as how. Documentation This phase is to come up with a way to document the project solution and requirements and it must contain everything needed during development and provide few checks: Economic, Legal, Operations, Technical, and Schedule, more or less. You must define the costs related to the development of the project, go over copyright and patents to avoid any legal conflict around the idea and product. The delivery schedule is a big one especially if you have a sales, marketing, and social media team and they need to create content and be ready for launch to promote the product. Design The design phase is not just about designing the interface of the product itself but anything related to it. It can be the overall system architecture, how the data look like, where it will be stored, and how the data flows in the system. You also define the functionality of each module and all related logic as well as how these modules talk to each other and yes, you also design the interface of the software as well. Coding After the design phase is the coding phase where you analyze the idea the documentation, requirements, and specifications and start coding things by following a coding plan, the product schedule, timeline, and roadmap. Anything that turns out to be more complex and deviates from the original plan should be communicated. Things may change as a result. I often see plan B being applied where you find an MVP version of the feature or the delivery and implement that and come back to it later on where you further improve the feature after much detailed research. The show must go on and it is hard to remove a wagon after a train is in motion. The coding is done maybe by following an agile development model where features are delivered in sprints, are planned in sprint plannings, and get daily Engineer updates in daily stand-ups. The development team keeps a backlog of features and bugs to distribute among them and address them per sprint which usually takes 2 weeks. Testing When the code is done it is the testing phase. I am not talking about unit tests as those should happen during the coding phase whether you use a Test Driven Development technique or not. The test phase is for QA and E2E(sometimes). These tests do not happen after 100% of the things are coded. They happen as different parts are completed. 
Anything that is found to be faulty or deserves improvement is sent back to be fixed by the engineers. The goal is not to introduce new features but to check that what was coded follows the requirements and does what it is supposed to do. The E2E is created to automate the user flow in a step-by-step pattern to mimic how the user would use the product. Deployment If everything is coded, tested, and seems to be right, it then gets deployed, but that does not mean that the developers’ and testers’ job is complete. The QA then tests things in production, as the production and development environments are different, and again, anything found to be broken is also sent to be fixed by the developers. At this point, the user will start interacting with the product, and sometimes things come up, and this is when customer support comes in. These people understand how the product works because they got trained as things were being built, or at the end. These people will guide the users through the product in case of any problem or if the user is stuck on some issue that is preventing them from using the product. Anything perceived to be a problem is turned into issues that are sent to the developers’ backlog to be checked by the engineering team and fixed if necessary. Customer support may even be handled by the developers themselves. Some companies use the concept of having developers “on-call” for any user-related issues. Normally small companies do that, and these engineers stay on call even during non-business hours. Maintenance After launch, there is the maintenance phase, the final phase of the cycle. This phase includes bug fixes like those reported after launch, software upgrades, and any new feature enhancements. The development cycle is circular, so if any new thing, version, or complex update needs to be done, it goes from phase 1 again until it is delivered. Observation One thing to notice is that the coding phase is often small. There is a lot of planning and support time dedicated to delivering a product. I worked in companies where we took 2 and a half years to deliver a product, as well as others that took 3, 6, or 9 months depending on the product type. No matter the time it takes to deliver software, they all follow or try to follow a software development cycle. Red Flag A red flag would be a place where the coding time is the largest phase, and normally these are startups that experiment, test, and come up with requirements as things are being coded and designed. These environments tend to be very stressful to work at, as things may change as you are coding them, meaning you start a sprint with a set of requirements and by the end of the sprint the design and requirements may have changed, which may mean that the developers need to allocate extra time to address these changes. Conclusion The size of the project should not matter, whether it is a side project or a freelance project. You should always try to follow a plan and get good at it. The steps will narrow your focus and allow you to deliver a product in chunks, which will keep you on track and satisfied as you go. I implement these steps fully or partially in my deliveries, which allows me to finish a side project, give a detailed plan, pricing, and timeline to a freelance client, and communicate well with the VP, managers, and project owners at work. Watch me code and create things by visiting beforesemicolon.com or subscribing to my YouTube channel.
https://beforesemicolon.medium.com/how-to-deliver-a-software-or-web-application-into-production-4bf309be4493
['Before Semicolon']
2020-12-27 18:28:23.396000+00:00
['JavaScript', 'Software Architecture', 'Technology', 'Software Engineering', 'Programming']
587
A global homeownership crisis is unfolding in front of our eyes.
It is time to act now. We are faced with an unprecedented homeownership crisis across Europe. Rising residential real estate prices have made it nearly impossible for young adults to become homeowners. Millennials are forced to rent much longer than previous generations and have been dubbed as ‘Generation Rent‘. Accumulated student loan debt, higher living expenses, falling real incomes, and continuously rising real estate prices, fuelled by low-interest rates, have made it significantly harder to save up for a downpayment. Despite favourable interest rates, consumers do not necessarily benefit due to restrictive lending, causing them to fall even further behind and missing their chance to take the crucial first step on the property ladder. Buying a home is, for most of us, by far the most significant financial decision of our lifetime. It requires us to have saved up a sufficiently large downpayment which covers at least the transaction costs, have a regular income to pay for the monthly mortgage payments, and additionally requires us to have a positive outlook on the housing market. Preferably, you want to be confident that you are making the right decision when you buy a home. However, life is fluid and uncertain, sometimes even confronted with external shocks, just like COVID-19. It, therefore, becomes even more important to leave no one behind and that everyone receives a fair chance to become a homeowner. With propifair we want to rectify the biggest challenge of our lifetime, making homeownership accessible and affordable for everyone. We want to offer young adults, in particular young families, a fair, flexible and realistic chance to take the first step on the property ladder. We are creating a middle ground between renting and buying. You should not be forced to wait to take the first step on the property ladder; you should not put yourself in debt if your current financial situation doesn’t allow it. Instead, you deserve to have a clear path to homeownership. You deserve to decide when it is the right time to buy, without worrying that real estate prices have skyrocketed in the meantime. To turn our vision into reality, we will go live in January. Let us change homeownership for good. Together.
https://medium.com/propifair/a-global-homeownership-crisis-is-unfolding-in-front-of-our-eyes-5c9576df126d
['Thore Behrens']
2020-12-14 14:45:47.396000+00:00
['Startup', 'Homeownership', 'Real Estate', 'Technology', 'Millennials']
588
Everything EOS Podcast #3: Major Announcements, Latest Partnerships, and Dawn 3.0
Everything EOS Podcast #3: Major Announcements, Latest Partnerships, and Dawn 3.0 A recap of the April 6 EOS Live Stream from Hong Kong and Other Developments Disclaimer: ICO Alert does not endorse or recommend participating in any initial coin offerings, including EOS. ICO Alert does not receive any compensation for the Everything EOS Podcast from Block.One or EOS; however, Robert Finch and Zack Gall both personally own EOS tokens. Episode #3 Summary: In today’s episode we cover the topics below: Recap of the April 6 EOS Meet Up in Hong Kong (Watch LiveStream) EOS Global Hackathon Announcement and Details (Read More) Another VC Partner (EOS Global) Announced to Fund $200m Towards Asia-focused EOS development. (Read More) Dawn 3.0, the first feature-complete pre-release of EOS officially released. (Read More) Inter-Blockchain Communication and Lost Password Recovery on EOS. “Usability, scalability and stabilization are essential to an operating system like a general purpose blockchain platform. Non-developers must be able to seamlessly interact with and use blockchain applications, while scalability is fundamental to the future growth of the blockchain ecosystem. The stabilization features we have introduced are key to ensuring resources can be utilized effectively across the network.” — Daniel Larimer, CTO of Block.One About EOS Scalable *Support thousands of Commercial Scale DApps *Parallel Execution *Asynchronous Communication *Separates Authentication from Execution Flexible *Freeze and Fix Broken Applications *Generalized Role-based Permissions *Web Assembly Usable *Web Toolkit for Interface Development *Self Describing Interfaces *Self Describing Database Schemes *Declarative Permission Scheme Equal Opportunity *To ensure inclusivity, EOS Tokens have no pre-determined price; rather, price is set by market demand. This mimics mining without giving potential unfair advantages to large purchasers. Broad Distribution *The EOS Token distribution takes place over 341 days, which is expected to provide ample time for the community to familiarize themselves with the project, as well as participate in the distribution. Transparency *An Ethereum smart contract proves receipt of incoming funds for EOS Tokens. Useful Links and References: The Everything EOS Podcast showcases discussion about the EOS.IO blockchain. Each week the podcast spotlights a specific topic, which may include reviews of new DApps, major announcements, VC partners, and more leading up to the launch of the EOS mainnet in June 2018 and beyond. The podcast is hosted by Zack Gall, a data research analyst at ICO Alert, and Robert Finch, the Founder of ICO Alert. New episodes are released weekly.
https://medium.com/ico-alert/everything-eos-podcast-3-major-announcements-latest-partnerships-and-dawn-3-0-e871f4cb33f
['Zack Gall']
2018-04-09 20:38:56.365000+00:00
['Technology', 'Venture Capital', 'Podcast', 'Blockchain', 'Hackathons']
589
Outdated IoT Assumptions and Misconceptions about Remote Monitoring
Outdated IoT Assumptions and Misconceptions about Remote Monitoring New technology is making preventative and predictive maintenance easier than ever. New technology is making preventative and predictive maintenance easier than ever. But industry mindsets that haven’t kept pace with technology can be a huge roadblock for service teams focused on attracting-and retaining-customers. In this article, you’ll learn: What preventative maintenance looks like-both for you and for your customers How the Internet of Things (IoT) products have impacted remote monitoring for service New ways to decide if an IoT platform is the right fit for your service ecosystem Table of contents Remote Monitoring Takes Too Long to Create Value-and When It Does, the ROI is Minimal Only the Service Team Will Benefit From Remote Monitoring We Already Have All the Data We Need Remote Monitoring Will Be a Burden On Our IT Team-and We Can Do It Better In-house IoT is Too risky. We Can Wait Until It’s More Established The definition of “service excellence” is an ever-shifting thing. Service excellence used to be defined by great technicians. Then you needed great technicians who could also respond immediately. Now, it’s not enough to have great technicians with fast response times- your customers expect you to anticipate and prevent machine failures, avoiding a traditional service visit altogether. New technology is making preventative maintenance the standard expectation of savvy service customers. But industry mindsets haven’t quite caught up-and misconceptions about preventative maintenance technologies are a huge roadblock when reaching for that “service excellence” level. At PTC, we work closely with field service professionals to help propel them into today’s new level of service excellence-and raise the bar for their competitors. In this eBook, we’ll explore the five most common misconceptions we hear about remote monitoring for service and how to keep these outdated assumptions from holding you back. Misconception 1: Remote Monitoring Takes Too Long to Create Value-and When It Does, the ROI is Minimal Today’s tech-savvy customers know that there’s always an emerging app or gadget that can solve their problems-they just don’t know what it is. But they’re worried that their competitors do know and are already solving their problems with the push of a magic button. Remote monitoring can seem like a magical click-the too-good-to-be-true technology fad with lofty promised benefits, such as improving vital key performance indicators (KPIs) and customer satisfaction. And any new technology is scary-it’s easier to assume remote monitoring is just a buzzword and stick with the way you’ve always done things. Truth: Remote Monitoring Creates Quick Wins and Long-Term Results that Customers Notice Remote monitoring is much more than a fad-it’s the basic foundation for creating a future-forward service ecosystem. Customers will see the results for themselves in reduced downtime, improved mean-time-to-repair, and better first-time-fix-rates. And you’ll see higher customer satisfaction reflected in improved NPS scores. For customers, equipment uptime is invaluable-and a service team that guarantees uptime is irreplaceable. With a best-of-breed IoT platform that seamlessly connects a wide range of assets and delivers real-time performance data, your service team will have the “magic button” to prevent maintenance issues before they become maintenance problems. 
Misconception 2: Only the Service Team Will Benefit From Remote Monitoring Onboarding new technology is a daunting task-especially if the software only benefits one specific department. Selling upper management and IT on launching a remote monitoring solution can seem overwhelming when there are competing priorities for new technologies across the enterprise-and limited budgets. “Lincoln Electric, a global manufacturer of highquality welding, cutting and joining equipment, used remote monitoring to ensure that parts could be delivered seamlessly to service technicians and customers. Through this simple use case, Lincoln decreased service costs by $1.2 million and increased parts revenue by $2 million.” — Service Transformation: Evolving Your Service Business in the Era of Internet of Things white paper Truth: Remote Monitoring Benefits the Bottom Line Across the Entire Enterprise Remote monitoring provides immediate benefits across the entire enterprise-from arming technicians with in-depth equipment data to increasing departmental visibility for upper management. Newly accessible data-as well as improved uptime and other metrics that increase customers’ confidence-open up new service models, decrease internal costs and increase contract renewals. This ROI is often significant enough to be felt across the entire organization, turning your service department into a high performing, revenue-driving team. Beyond those immediate wins, remote monitoring is a key first step to on-boarding overall IoT solutions. The more sophisticated IoT capabilities-such as increased automation and cross-system orchestration-all depend on the groundwork laid through remote monitoring connectivity. For IT teams, remote monitoring is often seen as a beta test to run before wading into larger IoT projects. If the low-lift IT projects of remote monitoring aren’t sustainable, then larger IoT projects are likely also overly ambitious. Misconception 3: We Already Have All the Data We Need Remote monitoring and data connectivity is likely already some part of your service plan, either through internal workarounds or do-it-yourself IoT projects. You know the importance of customer service data because you’ve been gathering that information for years. Remote monitoring seems like just a fancier way to get the data that you’ve already been using. Maybe it’s a bit faster or more reliable through an IoT platform, but is that really worth going through a software buying process, to just get the same data you can get now? Truth: Real-time, Remote Monitoring Data improves Service Time, Reduces Service Cost and Improves Service Revenue The best way to debunk this misconception is to simply look at the proven results of service leaders who rely on remote monitoring (see figures below). Remote monitoring through an industrial IoT platform is simply the only way to get the in-depth, reliable and real-time data needed for faster service delivery and growth in service revenue. This is especially true when using an industrial IoT platform such as ThingWorx, which was purpose-built for your KPIs. Remote monitoring has provided PTC customers with: 50% Reduction in the meantime to repair 66% Reduction in service requests 33% Increase in customer satisfaction Misconception 4: Remote Monitoring Will Be a Burden On Our IT Team-and We Can Do It Better In-house The last thing you want to do is ask your IT team to constantly support, repair, patch and upgrade another new software tool. 
A remote monitoring platform will be useless if it means calling your overworked IT team for support every five minutes. Especially if you are using workaround solutions now-why not just ask IT to build your own unique, custom solution? “Managing huge amounts of real-time data requires thoughtful planning and the flexibility to address the various combinations of data required.” — Building A Framework: The Industrial Internet Consortium white paper Truth: Remote Monitoring Through an IoT Platform Provides Out-of-the-Box Benefits for Your Service Team-and Your IT Team In-house, custom-built remote monitoring solutions can often seem like the best way to gain the benefits of remote monitoring, without the added IT burden. But if your in-house IoT solution is successful, other departments will want to try IoT use cases for themselves. Your in-house IoT solution will turn into a behemoth workaround, weighed down under the burden of its own success. In addition, in-house experts are often good at maintaining one type of connectivity-but cannot easily scale beyond initial goals and often have limited knowledge of the organization-wide benefits of IoT. And the challenges of an in-house solution are usually unapparent until the task is already in full swing. For example, collecting remote data is one challenge, but displaying it, analyzing it, or otherwise turning the data into actionable intelligence in a timely and useful manner is a whole other issue. IT teams that are able to solve all of these issues are generally hard to come by. A major benefit of a best-of-breed IoT platform is that it comes with best-of-breed support. Experienced IoT experts can help you create a remote monitoring solution that fits your exact needs and easily scales to new demands. Experts know the common roadblocks to IoT and have purpose-built the platform and onboarding process to overcome them. Misconception 5: IoT is Too risky. We Can Wait Until It’s More Established Adopting new technology takes time, money and work. And in our world of security concerns and fast-moving technology, the wait-and-see approach can seem the safest route. If your company considers IoT an unproven technology, that alone is likely enough to stop any remote monitoring initiative. “In previous decades, only industrial giants were able to invest in connecting sensors, controllers, and data analytics. Today, the cost of sensors, connectivity, and analytics software has plummeted, making it possible for every operation to upgrade, retrofit, and prepare for digital automation . . . Companies that delay may find themselves outcompeted, out-classed, and out of business.” — Smart Factory Task Group, Smart Factory Applications in Discrete Manufacturing white paper Truth: Not Incorporating Remote Monitoring Puts You at Risk of Falling Behind Competitors at an Exponential Rate The in-depth data, predictive maintenance capabilities and cost-savings made possible by remote monitoring are revolutionizing the machine service industry-and demonstrating how internal service teams can drive revenue. IoT-based remote monitoring is far from unproven or insecure- rather, it is now the proven first step to providing predictive maintenance, improving machine uptime and increasing customer satisfaction. Companies that take a wait-and-see approach will definitely see the proven benefits of remote monitoring-when their competitors use remote monitoring to gain more business. 
Best-of-breed IoT platforms easily scale to new service and organizational goals, as needed. So the longer you wait, the more your competitors build on their IoT groundwork to get further ahead. Best-of-breed IoT platforms stay competitive by relying on market feedback to stay ahead of users’ needs and can easily address new industry challenges.
https://medium.com/predict/outdated-iot-assumptions-and-misconceptions-about-remote-monitoring-7e99558baebc
['Alex Lim']
2020-11-05 03:14:19.278000+00:00
['Technology', 'Tech', 'IoT', 'Internet of Things', 'Remote Working']
590
5 Python Exercises
5 Python Exercises Best Way To Strengthen And Practice Your Python Skills Recently, I posted an article that aimed to explain the key components of Python programming language. I received a number of messages whereby the readers asked me to post Python exercises. I wanted to post exercises that should really help one understand how Python works. After thinking about it for some time, I have come up with 5 questions which should test your Python knowledge. Please have a go and post your answers in the comments sections. I want to ensure I can help you solidify your understanding of the concepts better rather than giving you questions that we can simply memorize and answer. Here are 5 Python exercises. For each exercise, I will also mention the topics it is intending to test. By the end of the exercise, you will feel that you have gained a much superior end-to-end understanding of the language. Remember, there are multiple ways to peel an orange so solve the questions how you understand them. I will post the answers in an upcoming blog soon. Photo by bruce mars on Unsplash 1. Logging Using Python Decorator Implement a calculator class with following functions: Sum(a,b), Multiply(a,b), Divide(a,b) and Subtract(a,b) 2. Import logging library 3. Decorate each method of the calculator class with a custom method that logs the values of a and b. Implement the logger custom method too. 4. Execute calculator.Sum(a,b) and it should print out the values of a and b. For example: The Input Values Of A and B Are '123' and '234' # if a =123 and b=234 What Will It Test? It will test whether you understood Pip commands that are required to import the libraries How to create class and functions with arguments in Python How to use decorators 2. Tree Traversal Using Python Recursion Implement a class: Node which will be used to represent a tree. For example: class Node(object): def __init__(self, name): self.name= name self.children = [] def add_child(self, obj): self.children.append(obj) Each node has a name and children e.g. a = Node('A') a_goal = Node('Goal') a.add_child(a_goal) 2. Print out all of the paths of the tree which can lead you to the node named “Goal”. 3. The tree can have N number of levels (1>N>100). Goal node can have children too. For example, for tree below, your code should print out following paths: A->Goal A->B->Goal A->D->Goal A->F->H->L->Goal All other paths do not lead you to the Goal node. Write your code in a way that it can be unit tested. What Will It Test? It should test your understanding of recursion It should also test how you prevent from going into infinite loops — it will test your loops, expression and conditional logic It will also test your data structures understanding and variables scope Photo by Fabrizio Verrecchia on Unsplash 3. Flatten A List Of Nested Dictionaries Into A List Of Multiple Flattened Dictionaries Create an object which contains a list of dictionaries. Each item in the list is a dictionary which contains a number of keys. Each key of the dictionary will contain a value. The value can be of type string, or it can be a of type dictionary. When the value is of type dictionary then it implies that it is a nested dictionary within a dictionary. Each dictionary can contain a variable number of keys. Loop over the items and create a single dictionary to store keys at the same level. 
For example, if you loop over the items and each item contains a dictionary with two keys, e.g. “Name” and “Surname”, and both of the keys contain values of type string, then simply return the collection of dictionaries (as it’s already flat). However, if it contains “Name”, “Surname” and “PlacesVisited” keys, where PlacesVisited is itself a list of dictionaries such that each item of the dictionary contains two keys, “Name of place” and “date when it was visited”, then I expect to see two lists as the result. The first list should contain a collection of dictionaries with keys Name and Surname. The second list should contain the keys “Name of place”, “date when it was visited” and ParentId, where ParentId will contain the key “Name” of the first dictionary. Take the value of the ParentId as the value of the first key of the parent dictionary, e.g. for the example above, “Name” is chosen as the ParentId. For each nested dictionary, create a new dictionary. The final result should be a number of flattened dictionaries to represent a nested dictionary. For example, if this is your input: sample_object = [ {'Name':'Farhad', 'Surname':'Malik', 'Blogs':{'BlogName':'Python1','Date1':'20180901'}}, {'Name':'Farhad2', 'Surname':'Malik2', 'Blogs':{'BlogName':'Python3','Date1':'20180101'}} ] The result should be: dictionary_1 = [ {'Name':'Farhad', 'Surname':'Malik'}, {'Name':'Farhad2', 'Surname':'Malik2'} ] dictionary_2 = [ {'ParentId':'Farhad', 'BlogName':'Python1','Date1':'20180901'}, {'ParentId':'Farhad2','BlogName':'Python3','Date1':'20180101'} ] The key is to ensure that the items at the same level belong to the same dictionary. What Will It Test? It should test your understanding of dictionaries, arrays and sets. It should also help you understand how to check for keys and values. Lastly, it will help you see how you can pass in optional parameters. This is how you can flatten out a JSON object. Photo by Kaleidico on Unsplash 4. Multi-Process And Error Handling Code Take the three exercises above and make the code run on multiple processes Use try/except and catch exceptions where appropriate Profile and log the performance of the code Write unit tests for each of the exercises that perform positive and negative tests What Will It Test? This will really help you see how to run your code on multiple processes How to catch exceptions and how to enable logging in your code to an extent that it is useful. 5. Package And Modules Organize the classes and code that you have implemented above into a package with multiple modules Understand how the files should be placed and imported. Create a main class that drives everything Write a console application that runs your unit tests via the command line and informs you of the tests that have passed or failed. What Will It Test? It should help you really understand and see how packages and modules work You will get a solid understanding of the Python programming language Summary This article presented you with 5 Python exercises. Please post your answers in the comments section. I will post the answers in my upcoming Python blogs. If you want more exercises, please do let me know.
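For readers who want a starting point before the author's promised answer post, here is a minimal sketch of one possible approach to Exercise 1 (the logging decorator). The decorator name log_arguments is an illustrative choice, not the author's forthcoming solution, and the methods are lowercased per Python convention rather than capitalized as in the exercise text.

import functools
import logging

logging.basicConfig(level=logging.INFO)

def log_arguments(func):
    # Decorator that logs the values of a and b before calling the wrapped method.
    @functools.wraps(func)
    def wrapper(self, a, b):
        logging.info("The Input Values Of A and B Are '%s' and '%s'", a, b)
        return func(self, a, b)
    return wrapper

class Calculator:
    @log_arguments
    def sum(self, a, b):
        return a + b

    @log_arguments
    def multiply(self, a, b):
        return a * b

    @log_arguments
    def divide(self, a, b):
        return a / b

    @log_arguments
    def subtract(self, a, b):
        return a - b

calculator = Calculator()
calculator.sum(123, 234)  # logs: The Input Values Of A and B Are '123' and '234'

Using functools.wraps keeps the decorated methods' names and docstrings intact, which also helps later when you reach the unit-testing and packaging exercises.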
https://medium.com/fintechexplained/5-python-exercises-35b36f1ca742
['Farhad Malik']
2019-04-22 21:29:04.326000+00:00
['Technology', 'Python', 'Data Science', 'Programming Languages', 'Fintech']
591
What can technology do to us?
Listen on: https://anchor.fm/ohknow Technology has developed from the invention of the lightbulb to the newest iPhone 12. Imagine you lived in the 19th or 20th century. How could you survive without your phone? How could they survive without their phones? This shows how desperate we are for technology nowadays. But this also shows how much humanity is growing and will grow in the future. smallbiztrends.com Humanity seems so successful. But if you look deeper, you realize that we aren’t that successful. We have developed technology that is destroying our lives. Take gaming, for example. Most of the younger generation right now are manipulated. Manipulated by video games. We have created something that can turn humanity into a disaster. Technology has its ups and downs. Social media is part of most adults’ lives and can ruin part of them. For example, most people, probably even you, have social media on their phone. Whenever you check your phone when you are bored, what is one of the first things you go to? You probably go on some type of social media. Then you are stuck to it, checking continuously; it’s so hard to stop. This is the problem. Some technology is useful and some can hurt you. Even with watching T.V.: when you turn on the screen, you watch something. All that while you could be watching a live play right in front of your own face. A T.V. is just a shortcut: no ticket to pay for, no driving, which can be reasonable. But what else could you be doing at that time? You could go outside, play a board game, make some art, etc. There are thousands of things you can do instead of watching TV, checking your phone, or playing a video game. Why would you rather check a friend's post on Facebook than talk to someone and make a new friend? anthillonline.com Technology is an addiction. But there is more to technology than that. It is supposed to help out and make our lives easier. The thing is that while technology is growing, it is taking away lower-paid jobs. Take the invention of the automobile: in the 19th century, there were people building automobiles. Now, there are robots. These robots are taking away tons of jobs. Robots can help in factories, farms, labs, etc. These are the reasons that technology is harmful to humanity. Thank you for reading! Watch our videos on our channel!
https://medium.com/@obtained/what-can-technology-do-to-us-b3b1ba18885d
[]
2020-12-25 00:26:35.665000+00:00
['iPhone', 'Devices', 'AI', 'Computers', 'Technology']
592
COMPLETED — LALA Transfer & Stellar Blockchain Integration, Testing with Other Stellar Anchors in Process
Dear LALA Family! After months of super hard work, we are glad to announce that we have reached a point where the integration and connections with the Stellar protocol are complete. We have signed NDAs and agreements with various partner anchors of Stellar, like Coins.Ph, TEMPO, MOIN, etc., and are now in various stages of integration, documentation, and implementation. Delivering a Stellar Product — LALA Transfer We studied and considered almost every Blockchain for the remittance process for LALA over the last 8 months. Keeping in mind our goal and vision of helping humanity and targeting migrants and the unbanked, we scrutinized several payment channels. We chose Stellar after several rounds of internal and external feedback because of its • Negligible transaction fees of €0.000001 • Settlement times of 5s on average And also because each payment would arrive directly from the LALA Wallet sender to the LALA Wallet receiver via your unique LALA ID acting as your full KYC — true peer-to-peer, Satoshi’s vision. Further, we have developed a white-label Stellar platform solution, and we intend to work very closely with Stellar to help other clients use this platform in a plug-and-play fashion. We have also developed algorithms and a central treasury desk to make markets between Fiat / LALA / XLM — an essential element of the Stellar protocol that not many have been able to crack so far. We are immensely proud of the entire LALA team for delivering the results as expected. In essence, LALA Transfer is a one-stop money transfer platform that combines traditional money transfer platforms (Western Union, TransferWise, Express Money, etc.), Blockchain-based remittance solutions (Stellar now and more to come later), aggregators of remittances, MTOs (money transfer operators) and new-age money sending methods (USSD SMS, mobile vouchers, etc.) all in one place to give the best rates at different locations globally. Product roadmaps and deliveries are also starting to roll out for other LALA Products. A detailed roadmap with updates will be out soon. The Stellar Network is an open-source Blockchain network which allows financial institutions and diverse payment services to be interoperable, making payments cheaper, more efficient, and faster. LALA and Stellar will open doors to the real people who need remittance services by utilizing Blockchain as a solution. This will open a wide range of opportunities for other partners and anchors as well, through the LALA white-label remittance solution. With further testing and integration with a couple more anchors, we expect to make live transactions within the next 60 days. LALA Tokens will further ride with XLM as the base market-making currency, providing a huge opportunity for LALA to be used widely. In summary, LALA has yet again delivered what was promised. 1. LALA continues to make human life better by focusing on migrants and the unbanked, and their families back home. 2. Stellar, with its open-source foundation and almost nil fees, advances our financial inclusion goals one step further. 3. LALA has developed white-label solutions on top of Stellar — one of the best Blockchain payment solutions today. 4. Market making helps both LALA and XLM equally, making LALA widespread along with other products like LALA Lends and LALA Pay. 5. LALA ID becomes the global E-KYC standard across the unbanked population to deliver sustainable solutions to such communities. Cross-border remittance services coupled with LALA ID’s global E-KYC service are the center of all of LALA World’s offerings.
With this integration, a strong foundation for the delivery of other services has been set, and we hope that this update would be a cause to rejoice for our community. We are pushing ourselves to ensure that we can meet the grueling timelines we have set. Our current processes and development positioned in sync and with our LALA partners, will only further strengthen the prowess of LALA. We believe that we are on track and couldn’t be more excited for what’s to come. To stay updated on the latest happenings & product news, join our Telegram community here. Many thanks for being the driving force in this tough, but a meaningful journey!
https://medium.com/lala-world/completed-lala-transfer-stellar-blockchain-integration-testing-with-other-stellar-anchors-in-729dbf0d7767
['Lala World']
2018-05-29 10:57:31.270000+00:00
['Technology', 'Stellar', 'Blockchain', 'Lala World', 'Bitcoin']
593
Celebrity status and how it corrupts us
As a creative, the idea of becoming famous in your field is exciting; the thought that people all across the world could know your name truly is an intoxicating concept. What happens, though, when that dream comes to fruition? Do we continue to ride the creative wave with a cult following to boot, or do we sink away into the Rockstar lifestyle, forgetting how we even got there? Let’s explore that idea. This is the tale of two Johns — Carmack and Romero, two of the most renowned game developers in the industry, responsible for the creation of the influential first-person shooter ‘DOOM’ (1993). Unbeknownst to them, DOOM would end up becoming one of the most ubiquitous pieces of software in the world (c. 1995), even overtaking Windows 95 with its sheer volume of downloads[1]. As you can imagine, Carmack, Romero and the rest of their team (known collectively as ‘Id Software’) attained legendary status in the game development community, but how did the two of them handle their newfound status? Let’s find out. John Carmack (far left) and John Romero (second from the right) John Romero’s Resignation After the production of DOOM and its sequel (along with other less relevant games), Carmack and his crew began work on an ambitious project with their newfound riches — ‘Quake’, a game that would revolutionise the gaming industry as one of the first ever games to use full 3-dimensional graphics [10]. However, during Quake’s development there was a clash between Romero and Carmack over the future of Id Software. Carmack wanted to make slow and steady progress, while Romero wanted the game to follow his over-ambitious vision. Carmack then accused Romero of not pulling his weight and focusing too much on living a Rockstar’s lifestyle with his newfound fame. John Romero later resigned after the release of Quake.[2] Carmack in the present Carmack would continue to produce games without John Romero, later leaving Id Software to pursue other goals[3], currently holding the title ‘consulting CTO (chief technology officer)’ of ‘Oculus’, one of the leading virtual reality businesses [4]. John Carmack never let the fame get to his head and always kept a solid vision in his projects (at least in my opinion). I believe he was the voice of reason to Romero, who was an ambitious and hot-headed but talented developer. What happened to Romero after he resigned, you may be wondering — I shall enlighten you. Ion Storm After his resignation, John Romero founded his own company, ‘Ion Storm’, where he would create one of the most infamous video games of all time, ‘Daikatana’[5]. Daikatana was met with immediate criticism due to its controversial marketing campaign, which tried to sell the game on John Romero’s fame — but unintentionally just came across as offensive (see below)[6]. Not only that, but the game had an unfinished feel, with many glitches and near-unplayability [7]. Due to John Romero not having Carmack as his shoulder angel — and his unorthodox marketing approach, Romero ended up delaying the game several years, from 1997–2000[8], as he spent too much time living a hedonistic lifestyle and not focusing on his craft[9]. It was clear that the fame had corrupted Romero and made him lose sight of what made his early projects like DOOM so thrilling. ‘John Romero’s about to make you his b*tch.’ and other terrible slogans. Perhaps, however, this is not the fault of Romero, but instead a natural human impulse to become complacent.
According to Lowers & Associates, one of the leading causes of complacency is the consequences of a moment of insight [11]. For example, after the success of Doom, Romero was so busy riding his wave of success that he failed to notice how distracted he was from his other projects. Instead of working hard on his future games he spent all day revelling in the success of Id Software’s big ‘eureka moment’. In conclusion, while fame is something we creatives all covet (or pretend not to covet), it is important that if we reach that level we stay humble and focus on our craft instead of the many temptations of being a celebrity. Follow in the steps of John Carmack and use the money to improve yourself and your talents — instead of spending it on quick thrills and materialistic objects. Dylan ‘Lee’ Burrows Reference list
https://medium.com/@dinodylan1036/celebrity-status-and-how-it-corrupts-us-8f221f3140ad
["Dylan 'Lee' Burrows"]
2020-12-23 10:12:56.127000+00:00
['Technology', 'Doom', 'Biography', 'Celebrity', 'Gaming']
594
Piggybacking, Acquisition Arbitrage, and Platform Risk
At its 2020 Partner Summit, Snap confirmed what many had been witnessing over the last two years: its developer tools ("Snap Kit") have allowed a number of startups to skyrocket through the charts. As of May 2020, over 20 of the top 100 iOS and Google Play Store apps use Snap Kit. This reminded me of a good number of startups that have scaled by piggybacking on existing platforms or by using other forms of user acquisition arbitrage. These strategies clearly involve a potentially fatal risk: what happens if someone turns off the tap? However, I ask myself whether early-stage venture-backed startups should embrace that risk if it means building and scaling faster and more efficiently than the competition, at least initially. Let's start with definitions. Piggybacking and Acquisition Arbitrage 📈 Piggybacking describes the situation where a startup leverages the infrastructure or network of an existing platform to either: (a.) build components of its own offering; or (b.) acquire users, customers or viewers. In simple terms, it's the entrepreneurial equivalent of climbing a mountain on someone's back. For example, Snap Kit allows entrepreneurs both to access Snap's technology to build product features (e.g., with LoginKit, BitmojiKit or CameraKit) and to share their products with Snap's userbase (e.g., with stickers through CreativeKit). Piggybacking is also possible when a platform doesn't release developer tools; remember how Airbnb hacked its way into piggybacking on Craigslist? Importantly, using another platform's tools to build core aspects of the product (as opposed to using acquisition tools specifically) can have significant effects on acquisition too. For example, using Snap's LoginKit simplifies building the onboarding flow of your app, but it also lowers friction for Snap users and therefore facilitates acquisition of those users.
https://medium.com/secocha-ventures/piggybacking-acquisition-arbitrage-and-platform-risk-9ba13395a8c
[]
2020-07-08 16:05:03.064000+00:00
['Technology', 'Marketing', 'Growth Hacking', 'Startup', 'Venture Capital']
595
Garage Door Repair — Leave Torsion Springs to the Professionals
Replacing torsion springs is a job that needs to be done from time to time. There are plenty of sites that tell you how to do it yourself. However, torsion springs are extremely dangerous and, unless you are very well prepared with the correct tools and experience, and unless you pay the strictest attention while installing them, you could lose fingers, limbs or even your life. Instead of trying to do it on your own, it is strongly recommended that you use a garage door repair professional to do the job for you. What Are Torsion Springs Torsion springs are a crucial part of your garage door. These are metal springs that are the key component of the counterbalance system that opens and closes the door. Garage doors are heavy; even the lightest may weigh as much as 100 pounds. When the torsion springs are fully wound, the door is open; when the door is down, the springs are extended straight. In both positions, the springs are under a great deal of tension. They are also extremely strong. From time to time, they need to be replaced. If they wear out and break while the door is in use, it can be extremely dangerous for anyone standing nearby. Company Information: Direct Service Overhead Garage Door Company 224 Country Club Pkwy Maumelle, AR 72113, USA Phone: (501) 244–3667
https://medium.com/@ora8988/garage-door-repair-leave-torsion-springs-to-the-professionals-4ffe377f8d34
['Ora O']
2019-05-11 13:41:43.162000+00:00
['Company', 'Garage', 'Technology', 'Services', 'Door']
596
Try_Hack_Me Cyber Fundamentals learning path
Hi sec folks, a new lab has been released on Try_Hack_Me. Time to sharpen your skills. Do you want to HACK_THE_WORLD? Join up on Try_Hack_Me using this link. Pre-Security is a new lab where you can learn much of what you want. Before hacking something, you first need to understand the basics: Cybersecurity basics Networking basics and weaknesses The web and common attacks Learn to use the Linux operating system Introduction This learning path will teach you the prerequisite technical knowledge to get started in cybersecurity. To attack or defend any technology, you first need to learn how that technology works. The Pre-Security learning path is a beginner-friendly and fun way to learn the basics. Your cybersecurity learning journey starts here! Complete a room that's part of this path and win tickets; collect 3 of the same ticket to redeem a prize. If you're a free user you can win 1 ticket, while subscribed users can win 2 tickets. What is the Pre Security path? Learn the prerequisite technical knowledge to get started in cybersecurity. To attack or defend any technology, you first need to learn how that technology works. The Pre-Security learning path is a beginner-friendly & fun way to learn the basics. Thanks, TRY_HACK_ME Team.
https://medium.com/@narendran182000/try-hack-me-cyber-fundamentals-learning-path-3ddbd67cdc9e
[]
2021-07-06 00:34:22.044000+00:00
['Information Technology', 'Tryhackme', 'Infosec', 'Tryhackme Walkthrough', 'Bug Bounty']
597
Samsung Galaxy S10e : Latest and new smartphones 2020
A smartphone is a cell phone that provides advanced technology with functionality similar to a personal computer. While offering a structured interface for application developers, a smartphone runs a full operating system. Mobile features such as the internet, instant messaging and e-mail, as well as built-in keyboard features, are also very common. For these purposes, a miniature device with the look of a simple phone may be called a smart phone. Samsung Galaxy S10E The Galaxy S10e is the smallest and lightest (150g) of the three new Galaxy S10 versions. It feels very secure to hold, and the rounded edges are not abrasive to the skin. The aluminum-and-glass back is a little slippery when used with one hand, and the phone picks up fingerprints quickly. The Samsung Galaxy S10e has a 5.8-inch full-HD+ Dynamic AMOLED display with HDR10+ certification. At the back of the handset there are a few more cuts compared to the other Galaxy S10 versions: it ditches the telephoto camera, so you just get the key dual-aperture 12-megapixel sensor and the 16-megapixel ultra-wide-angle one. General app and user interface performance is fantastic. Even high-end games like PUBG Mobile performed well; the Galaxy S10e can easily handle them at the highest settings thanks to its efficient SoC and lower screen resolution. The Galaxy S10e has the smallest battery of the series at 3100mAh, which has a noticeable impact in regular use; we generally got around 17 to 18 hours of battery life. During the daytime, the phone captures detailed landscapes and macros with a strong color spectrum. The software is easy to use and features a creative interface. Live Focus works well, with strong edge detection, and background blurring effects may also be applied. Price of Samsung Galaxy S10e The price of the Samsung Galaxy S10e in India begins at Rs. 47,650 (6GB/128GB). The lowest price of the Samsung Galaxy S10e is Rs. 47,650 on Amazon as of 18 December 2020. Detailed Specifications of Samsung Galaxy S10e Originally published at https://neverendingfoot-steps.blogspot.com.
https://medium.com/@paudelganesh800/samsung-galaxy-s10e-latest-and-new-smartphones-2020-40037c9289e7
[]
2020-12-27 12:09:30.961000+00:00
['Samsung', 'Blogging', 'Technology', 'Smartphones', 'Blogger']
598
What is ETL?
In our articles related to AI and Big Data in healthcare, we always talk about ETL as the core of the process. We do not write a lot about ETL itself, though. In this post, we'll give a short overview of the procedure and its applications in business. ETL is the abbreviation for Extract, Transform, Load, which are three database functions: Extract is the process of reading data that is assumed to be important. The data can be either raw data collected from multiple, different types of sources or data taken from a source database. Transform is the process of converting the extracted data from its previous format into the format required by another database. The transformation occurs by using rules or lookup tables or by combining the data with other data. Load is the process of writing the data into the target database, data warehouse or another system. ETL in its essence is a type of data integration used to blend data from multiple sources. ETL vs. ELT The ETL paradigm is inherent to Data Warehousing, but Big Data has significantly changed the order of the processes. In Big Data, data is "lifted and shifted" wholesale to a repository, such as a Data Lake, and is held there in its original format. It is transformed "on the fly" when needed by Data Scientists, creating the procedure of ELT, or Extract, Load, Transform. One of the main benefits of ELT is a shorter load time. As we can take advantage of the built-in processing capability of data warehouses, we can reduce the time that data spends in transit. This capability is most useful when processing the large data sets required for business intelligence and big data analytics. In practice, however, things are not so black and white. Many Data Lakes, for example, contain intermediate merged and transformed data structures to ensure that each Data Scientist doesn't repeat the same work, or carry it out in a different way. Where are ETL/ELT used? ETL is not a new technology: businesses have relied on it for many years to get a consolidated view of their data. The most common uses of ETL include: ETL and traditional uses Traditionally, ETL is used to consolidate, collect and join data from external suppliers or to migrate data from legacy systems to new systems with different data formats. ETL tools surface data from a data store in a format comprehensible to business people, making it easier to analyze and report on. The key beneficiaries of these applications are retailers and healthcare providers. ETL and metadata ETL provides a deep historical context and a consolidated view for the business by surfacing metadata. As data architectures become more complex, it's important to track how the different data elements are used and related within one organization. Metadata helps understand the lineage of data and its impact on other data assets in the organization. ETL and Data Quality ETL and ELT are extensively used for data cleansing, profiling and auditing, ensuring that data is trustworthy. ETL tools can be integrated with data quality tools, such as those used for data profiling, deduplication or validation.
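To make the three steps described above concrete, here is a minimal, illustrative sketch in Python. Everything specific in it is a hypothetical example of ours rather than part of any particular product: the visits_export.csv source file, its id/date/charge fields, and the SQLite database standing in for a data warehouse.

import csv
import sqlite3

def extract(path):
    # Extract: read raw rows from a source file (a hypothetical CSV export).
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    # Transform: convert the extracted rows into the format the target expects.
    transformed = []
    for row in rows:
        transformed.append({
            "patient_id": row["id"].strip(),
            "visit_date": row["date"],                 # assumed already ISO-formatted
            "charge_usd": round(float(row["charge"]), 2),
        })
    return transformed

def load(rows, db_path="warehouse.db"):
    # Load: write the transformed rows into the target store (SQLite as a stand-in).
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS visits "
        "(patient_id TEXT, visit_date TEXT, charge_usd REAL)"
    )
    con.executemany(
        "INSERT INTO visits VALUES (:patient_id, :visit_date, :charge_usd)", rows
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    load(transform(extract("visits_export.csv")))

In an ELT variant of the same sketch, the raw rows would be loaded into the target first and the transformation would run later, inside the warehouse, when someone actually needs the cleaned view.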
ETL and Self-Service Data Access Self-service data preparation is a fast-growing field that puts the power of accessing, blending and transforming data into the hands of business users and other nontechnical data professionals. ETL codifies and reuses processes that move data without requiring technical skills to write code or scripts. With this approach integrated into the ETL process, less time is spent on data preparation, improving professionals’ productivity.
https://medium.com/sciforce/what-is-etl-1df5305bb341
[]
2019-09-27 12:48:33.283000+00:00
['Artificial Intelligence', 'Healthcare', 'Big Data', 'Technology', 'Data Science']
599
Write your own Smart Contract and deploy it…
In this article, we will see how to write our own smart contract and deploy it. As I mentioned in my previous article, I have already covered why we need smart contracts and where we need them; I will provide a link in the comment section where you can read all the material related to smart contracts. So, in this article, I will practically teach you how to write your own smart contract. Now, open your browser and go to http://remix.ethereum.org/. This is an online editor for the Ethereum Virtual Machine (EVM) where you can write your first contract. You can simply copy the link above to open the editor. Remix Online Editor This is the page you see when you open the link. To compile and deploy, we need to activate the Solidity plugin. To do this, click on the Solidity button (the black arrow shows where it is); once you click it, the plugin will be activated. Next, open a new file by clicking on the plus button (the red box shows where it is), and give it any name, for example my_name.sol. After creating the file you will see a blank page; here is the code that I wrote for you (see the sketch after this walkthrough); write this code. OK, now I will explain what is going on in this code. The first line tells you that the source code is written for Solidity version 0.4.0 or a newer version of the language, up to but not including version 0.7.0. Pragma is a common instruction to compilers about how to treat the source code. A contract, in the sense of Solidity, is a collection of code (its functions) and data (its state) that resides at a specific address on the Ethereum blockchain. You can give your contract any name; for this demo we call it myContract. Just like in any other programming language, we declare a string variable, myname, with the public keyword. This whole line declares a state variable. So now the question is: why do we need the public keyword? The keyword public automatically generates a function that allows you to access the current value of the state variable from outside of the contract. Without this keyword, other contracts have no way to access the variable. After that, we declare a function named writeanyname, and we declare a string parameter for it. Now, why do we use the memory keyword? In functions without the memory keyword, Solidity will try to use the storage data location, which may compile but can produce unexpected results. memory tells Solidity to create a chunk of space for the variable at method runtime, guaranteeing its size and structure for future use in that method. We also mark the function public; the public keyword on the function means we can call it easily after deploying the contract. This is the code that I wrote for you and for the beginners who are here and want to learn the Solidity language. Compile your contract by pressing the compile button (for me it is 'Compile hi.sol'). Then click on the black box button to deploy your first smart contract. Normally we need some gas to deploy a smart contract, but here it is free. Click on the Deploy button and deploy your contract. Once it is deployed, expand it and click on the myname button; you will see the 'Hi World' value. You can then write any name, anything you like, inside double quotes, click on the writeanyname button, and then click on the myname button again, where you will find your own written name. So, that is all of it. I hope that after a lot of struggle you have a happy face after this article: you can now successfully write your own smart contract and deploy it.
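For reference, here is a minimal sketch of the contract described in the walkthrough above. The original code appeared only as a screenshot, so treat this as a reconstruction: the contract name myContract, the state variable myname, the initial 'Hi World' value and the function writeanyname all come from the article's own description, while the parameter name newname is a placeholder of ours.

pragma solidity >=0.4.0 <0.7.0;

// A contract is a collection of code (its functions) and data (its state)
// that lives at a specific address on the Ethereum blockchain.
contract myContract {
    // State variable; "public" auto-generates a getter, which is why Remix
    // shows a "myname" button after deployment.
    string public myname = "Hi World";

    // "memory" gives the string parameter a temporary data location for the
    // duration of the call; "public" lets us call the function after deployment.
    function writeanyname(string memory newname) public {
        myname = newname;
    }
}

Pasting this into a new .sol file in Remix and compiling with a 0.5.x or 0.6.x compiler (both fall inside the pragma range above) should reproduce the myname and writeanyname buttons discussed in the walkthrough.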
For now, take care, stay safe, and stay home.
https://medium.com/@umairansar000/write-your-own-smart-contract-and-deploy-it-73e80486afba
['Umair Ansar']
2020-05-10 15:03:46.175000+00:00
['Blockchain Technology', 'Solidity', 'Etherem', 'Smart Contracts']