Dataset schema:
Unnamed: 0 — int64, values 0 to 3k
title — string, lengths 4 to 200
text — string, lengths 21 to 100k
url — string, lengths 45 to 535
authors — string, lengths 2 to 56
timestamp — string, lengths 19 to 32
tags — string, lengths 14 to 131
900
Entrepreneurs: If You’re Looking for Podcasts in 2020, Pick These
Stuff You Should Know For random knowledge, SYSK is the place to go. This award-winning podcast comes from the writers over at HowStuffWorks and is consistently ranked in the top charts. Every Tuesday, Thursday, and Saturday, Josh Clark and Charles W. “Chuck” Bryant educate listeners on different topics. No matter the topic, they always cross-connect with pop culture. Want to learn how going to the moon works? How yawning works? What prison food is like? After lots of time listening, you’ll end up feeling like you’ve completed a degree in Out-Of-Left-Field Things. Business Wars There are fascinating stories behind many of the household-name companies and products that we all know. Business Wars host David Brown takes you through the audible journeys that brought many of these companies and products to what they are today. Grasp the details of how Evan Spiegel grew Snapchat to go head-to-head with Facebook, or listen to the battle in the chocolate market between Hershey and Mars. The use of great sound effects and creative narration by this Wondery podcast makes the listening experience comparable to watching a documentary. Reply All For tales that keep you listening, tune in to Reply All. Focused on how people shape the internet and how the internet shapes people, hosts PJ Vogt and Alex Goldman have lively discussions about random yet intriguing situations and dig deep. One episode, The Snapchat Thief, is about how the identity of a Snapchat account hacker was investigated and (spoiler alert) eventually found. Another episode, called Adam Pisces and the $2 Coke, is about the occurrence of a flood of strange Domino’s Pizza orders. Each segment is about 30 to 45 minutes long, a good length for the average commute. How I Built This with Guy Raz Chances are, you’ve at least heard about HIBT. Produced by NPR, this is a podcast about the stories behind the movements built by entrepreneurs, innovators, and idealists. Each weekly episode is 30 to 60 minutes of conversation between host Guy Raz and a notable guest. You can hear about the origins of Atari (and Chuck E. Cheese) from Nolan Bushnell himself, and about how Sara Blakely founded Spanx. You can listen to Drybar’s Alli Webb, or to Haim Saban’s story about Power Rangers. If you want to learn about the in-depth process and interesting hurdles that go hand-in-hand with groundbreaking success, you’ll enjoy this. Every Little Thing Similar to Stuff You Should Know, ELT is a goldmine for random facts. As the host, Flora Lichtman takes you through some of the most pressing questions out there. How are new stamp designs created? What are dogs saying when they bark, and why do auctioneers talk so fast? How do you make that pumpkin spice flavor we all know? This podcast also has a wide variety of invited guest speakers. In one segment you can hear from an airline pilot, and another you can learn from a microbiologist. If you’re someone who likes to learn something new every day, these segments have you covered. Syntax.fm If you happen to be a hardcore tech geek or want to get accustomed to tech lingo, you’ll love Syntax.fm. The hosts, Scott Tolinski and Wes Bos, teach web development for a living, so they have a wide range of tech fluency, from JavaScript to CSS to React to WordPress. Although niche, these are topics that influence the work of many. They have unique segments like the Spooky Stories episodes, through which you can hear about moderately-disastrous tech-related incidents. 
They also discuss more general topics, like design foundations for developers and how to get better at solving problems. Episodes are light-hearted and full of awesome info. The Pitch If you’re a fan of Shark Tank, you will enjoy tuning in to The Pitch. The show, hosted by Josh Muccio, features entrepreneurs who are in need of venture funding and pitch investors, live. The goal is to give listeners an authentic look into what it’s really like to get involved with venture capital. You’ll hear from one entrepreneur per episode, so you’ll get into the details. You’ll hear stories about new businesses, post-pitch pivoting, and will even get to follow folks through their journey months after their pitch.
https://medium.com/swlh/entrepreneurs-if-youre-looking-for-podcasts-in-2020-pick-these-15e4b613006b
['Ben Scheer']
2020-01-05 10:33:13.551000+00:00
['Business', 'Startup', 'Podcast', 'Technology', 'Productivity']
901
Serverless Monitoring Is No Longer “Finding a Needle in a Haystack”
At Human AI Labs (hu.man.ai), formerly Luther.ai, we use the AWS serverless stack and Kubernetes for all of our core real-time pipelines, with data-driven execution across AWS services — ECS, Lambda, SQS, Fargate, and more. With hundreds of services and thousands of invocations every day comes the complexity of configuration, execution monitoring, log review, and latency measurement. For configuration and CI/CD, we use Serverless for packaging and deploying Lambda functions. We tried multiple monitoring solutions, including relying only on AWS-native options, but at this scale several issues surface: Multiple programming languages are used for AWS Lambda development. Containers and tasks run in ECS on EC2 and ECS on Fargate. External service access (outbound API calls) needs review, including unauthorized calls. Persistent storage access needs review and latency measurement. We need unified access to execution logs, searchable by full text and by time. We need contextualization of the service-based view. We need proactive notification to the places where we work — Slack, PagerDuty, etc. We need a searchable history of events. Beyond these specifics, the development team wants to focus on the serverless functions themselves rather than grow their monitoring footprint, which otherwise creates plenty of worry for on-call DevOps engineers. After reviewing various services to solve the issues listed above, Epsagon is the solution we implemented. Here is the journey of how it saved hours and hours for our serverless implementation, broken into installation, monitoring, latency measurement, and notifications. Installation / Onboarding: If you do AWS serverless development and are not using Lambda layers, you are missing a core feature that helps a lot. Auto-tracing for the Lambda functions in your AWS region is enabled with a simple workflow that leverages Lambda layers, which removes the per-language dependency work otherwise needed to enable monitoring while still allowing custom logic per serverless function. Monitoring: Once auto-tracing is enabled, proactive monitoring provides alerts and notifications. We use the native Slack and email integrations to receive them. Each notification carries a contextual link to the alert, where the AWS CloudWatch log, the start time of the Lambda execution, and the service map of everything the function touched (external API calls, AWS services inbound and outbound) are available as quick actions — handy links back to AWS. If, like me, you are interested in patterns, you can use the historical view (available for the last 7 days) of all issues to understand whether the last deployment, a specific user action, or scalability caused them — we uncovered an interesting issue using historical patterns, but that is for a future blog post. Epsagon — tabular view of all the Lambda functions. Latency measurement: With hundreds of services and many thousands of executions every day, even a couple of extra milliseconds of execution time added to one service in the real-time pipeline can result in a bad user experience. With Epsagon, it is efficient to isolate latency issues from multiple angles. As in the picture above, a unified view across all functions is available with the average execution duration — which is a great start. For each Lambda execution, a contextual service map is available showing the duration of each service call.
With the Service Map view, we can review all of a Lambda function's dependent services as a single landscape, with the ability to go back in time and understand the subtleties. Epsagon — Service Map view example. Notifications: With all the integrated features, we configured extensible alerts; we use the PagerDuty integration for core functions, for both high latency and errors. Native integration with Jira also lets us document a bug from the tool itself for each issue. The contextualization of the information captured in the bug is a key feature — no more worrying about whether you took the log capture, what time the issue occurred, and so on. Did I mention that we self-configured everything from start to finish in a weekend? Yes, we did, for both our dev and prod environments. Here are the resources that proved handy: AWS Workshop — https://epsagon.awsworkshop.io/ Epsagon Docs — https://docs.epsagon.com/ and the AWS Activate program — https://aws.amazon.com/activate/ To conclude, with serverless deployments, monitoring workloads with hundreds of services and millions of invocations is no longer "finding a needle in a haystack."
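As a rough sketch of the kind of per-function latency overview described above (this is plain boto3 against CloudWatch, not Epsagon's tooling; it assumes AWS credentials and a default region are already configured), you could pull the average Duration for every Lambda function in a region:

# Minimal sketch: list every Lambda function and print its average Duration
# over the last 24 hours, as reported by CloudWatch.
from datetime import datetime, timedelta, timezone
import boto3

lambda_client = boto3.client("lambda")
cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(days=1)

paginator = lambda_client.get_paginator("list_functions")
for page in paginator.paginate():
    for fn in page["Functions"]:
        name = fn["FunctionName"]
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/Lambda",
            MetricName="Duration",
            Dimensions=[{"Name": "FunctionName", "Value": name}],
            StartTime=start,
            EndTime=end,
            Period=86400,              # one datapoint covering the whole day
            Statistics=["Average"],
            Unit="Milliseconds",
        )
        points = stats["Datapoints"]
        if points:
            print(f"{name}: {points[0]['Average']:.1f} ms average over the last 24 h")

A dedicated tool adds the tracing, service maps, and alerting on top, but this is roughly the raw data such a per-function latency table is built from.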
https://medium.com/humanailabs/serverless-monitoring-is-no-longer-finding-a-needle-in-a-haystack-402a1b51a78b
[]
2021-02-24 03:58:08.212000+00:00
['AWS', 'Monitoring', 'Technology', 'Serverless', 'AI']
902
Thalmic Labs’ Next Phase: Announcing $120M USD in Series B Funding
Superpowers, here we come. Today, we’re excited to announce our $120M USD Series B Funding led by Intel Capital, The Amazon Alexa Fund, and Fidelity Investments Canada, with participation from existing investors including Spark Capital. This new investment will help us realize our vision for the next era of computing, where the lines between humans and digital technology become increasingly blurred. We’ve been reimagining human-computer interaction since we first created the Myo armband in 2012 and are proud of what the team at Thalmic has accomplished to date. With Myo, we built an entirely new type of sensor from the ground up, made breakthrough advances in gesture recognition, and ultimately turned science fiction into reality for tens of thousands of customers in over 150 countries. Our developer community took the technology further than we ever imagined possible: researchers are using the Myo armband to train amputees on how to use their prosthetic limb and to translate sign language, while some developers have built incredible virtual reality experiences. Myo was just the beginning. We have new products in the pipeline and are excited to share more soon. Nearly four and a half years in, what we’re most proud of is the incredible team of people who have joined us. It’s a team that lives at the intersection of science and engineering; reasoning from first principles and pioneering new paths. We’re humbled every day to work alongside our colleagues at Thalmic and are always looking for those sharing our passion and vision to join the team. We’re hiring experienced team members for sales, marketing, and business development positions in our new San Francisco location while growing our engineering, R&D, and design teams in Waterloo. This financing marks an exciting milestone for us, however, it’s just one step on a long journey ahead of us. Today is still day one. We wouldn’t be where we are today without our early supporters, our Myo community, and our investors. Want to be the first to know what’s next? Sign up. — Aaron Grant, Matthew Bailey, and Stephen Lake Co-Founders, Thalmic Labs
https://medium.com/thalmic/thalmic-series-b-f8f9e9c026bf
['Stephen Lake']
2016-09-19 18:47:41.539000+00:00
['Wearable Technology', 'Venture Capital', 'Technology', 'Startup']
903
Why is it difficult to hack a Public Blockchain?
Why is it difficult to hack a public blockchain? A public blockchain is a decentralized system. In such a system there are multiple nodes, and each node is synced with the entire blockchain database. To hack such a system, an attacker has to alter the information on all of the nodes, which is practically impossible. Even if somebody managed to do it, the value of the hacked coins would fall dramatically, so the return on investment for the attacker becomes very low. In a public blockchain the cost of a transaction is also high, because each transaction is verified by thousands of users; as a result, an attacker would have to spend a large amount of money to alter such transactions.
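A toy illustration of why tampering is so hard to hide (this is not a real blockchain, just a minimal hash chain in Python): because each block stores the hash of the previous one, changing any old record breaks every later link, so any honest node comparing its copy can detect the altered chain immediately.

# Toy hash chain: altering an earlier record invalidates every block after it.
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(records):
    chain, prev = [], "0" * 64
    for data in records:
        block = {"data": data, "prev_hash": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev_hash"] != prev:   # link to the previous block must match
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice pays bob 5", "bob pays carol 2"])
print(is_valid(chain))                     # True
chain[0]["data"] = "alice pays bob 500"    # tamper with an old record
print(is_valid(chain))                     # False: honest nodes reject this copy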
https://medium.com/@blockchaintrainer/why-is-it-difficult-to-hack-a-public-blockchain-21fd8f628b56
['Chintan Dave']
2020-11-13 18:02:03.661000+00:00
['Blockchain', 'Blockchain Technology', 'Blockchain Application', 'Blockchain Startup']
904
The Release Manager model
However, someone else had "accidentally" merged another pull request earlier and didn't ship it to production (no issues until now), and that previously merged code contained a bug that caused a serious incident. The first action is to revert the last known pull request, which doesn't solve the problem, so the next actions are to start calling people and digging through the last merged pull requests, trying to identify the buggy code and make the right revert. The scenario above highlights a few problems: no explicit recovery point in case of failure; no release or changelog mechanism; no protection against human error; an error-prone process full of miscommunication. The schedule. Deploy calendar notification on a Slack channel. As an attempt to better organize deploys, a shared calendar was created with scheduled deployment windows, where anyone on the engineering team could simply pick a free slot to deploy their code to production. The schedule was divided into 4 windows: Morning: 7:00 AM to 8:00 AM BRT; Lunch: 1:00 PM to 2:00 PM BRT; Late Afternoon: 6:00 PM to 7:00 PM BRT; Early Morning: 2:00 AM to 5:00 AM BRT. This addressed the visibility issue, but it still carried communication overhead and started to slow the engineering team down: as the team grew, it became hard to find a free spot in the deployment windows. What about Continuous Deployment? First of all, there's the somewhat pedantic fact that deployment does not imply release. What makes continuous deployment special is deploying every change that passes the automated tests (and optionally a short QA gate) to production. Continuous deployment might seem extreme and dangerous. Teams usually need some breathing room between development and deploy, especially with database migration changes, where it's very important to have a clear safe point to roll back to in case of failure and to ensure that the code and the database stay backward compatible. Because of that, only a few companies properly run this practice today… As an attempt to solve the productivity issue, we started some experiments with what we called "continuous deployment for real"… In the first design, we tried to automate the existing deployment strategy, since the development cycle already used branching to split environments. Continuous deployment draft, back in 2018. The initial implementation was done against the staging environment, which already had its own branch and was a safe place to fail, and it definitely improved engineering productivity, because everyone was able to just push code to the branch and get it running in a real-world environment… But (this world is so cruel) we faced some problems that could be very harmful in the production environment. One of the challenges came with database migrations: automating the execution of a migration, when one exists, wasn't hard, but to scale the process we would also need to build a lot of tooling to safeguard failure scenarios and create a stable recovery point in case of a rollback. To be continued… With the branching approach in mind, we also tried to use the Gitflow workflow and couple each branch to an infrastructure environment (i.e., development, staging, and production). However, GitFlow is a bit complex, and it would take too much effort to automate the entire process.
Furthermore, we faced an organizational issue: multiple teams working on several features with different release dates, making it very hard to align everyone around a single release. Automating GitFlow. Evolving for stability. All of the previous approaches were driven by speed and productivity, but this philosophy becomes a bottleneck once the engineering team starts to grow: the "deployment queue" gets bigger and bigger, which doesn't leave much time to observe the system in production. During Black Friday 2018 we realized that we needed to change the way we shipped changes to production, because we were moving so fast that it was hurting us, mainly by shipping bad code to production and increasing the number of outages. Loggi's 2018 outage main reasons. As the company grows, the business becomes more critical and less tolerant of failures. That is a very common path for growing companies; Facebook is an example. At the company's F8 conference in 2014, Zuckerberg said, referring to the "Move Fast and Break Things" philosophy: "What we realized over time is that it wasn't helping us to move faster because we had to slow down to fix these bugs and it wasn't improving our speed." But slowness isn't good for a startup, so we set out to think about how we could keep moving fast while staying stable… Changing (a bit) the development cycle. Since we'd dropped the idea of going with the GitFlow + continuous deployment approach, at least for the production environment, we decided to take the best from the Feature Branch workflow and from Google's SRE release engineering. Branching strategy: the main branch is the single source of truth, and code on this branch is considered stable and ready to deploy. With that said, we kept the development cycle very similar to the Feature Branch workflow, where all feature development takes place in a dedicated branch and is merged back into the main branch. Image by Kasun Siyambalapitiya. Google's SRE releasing strategy says that projects don't release directly from the mainline; instead, a new branch is created from the mainline, named with a specific revision version, and becomes read-only. We dropped the GitFlow strategy because of the huge number of branches coming and going, and decided to use a git tag as the version marker — it behaves like a branch but doesn't change, which lets us keep atomicity. Even after all this work we still had the organizational issue mentioned before, and still an open question: what is a release for us, and when should we do it? At the end of the 2018 holidays, we introduced the concept of the daily deploy, which became both our definition of a release and of when it happens. The Red / Black deployment. Since our main goal was to increase stability, we started the year using a more reliable deployment approach that we called "Push do Macunha," in honor of its creator. The design takes a similar approach to canary deployment, but we renamed it "Red / Black" (believe it or not, we already had the blue/green terminology in our platform). At a scheduled time every day, a new tag is automatically generated and the pipeline is started, leaving the code ready to be deployed and waiting to be approved.
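A minimal sketch of that daily, automatically generated release tag (this is not Loggi's actual pipeline; the tag naming scheme and branch name are assumptions for illustration):

# Sketch: cut a date-stamped release tag from the tip of main and push it,
# which is the event that would kick off the deployment pipeline.
import subprocess
from datetime import datetime, timezone

def cut_daily_release_tag(remote="origin", branch="main"):
    tag = datetime.now(timezone.utc).strftime("release-%Y-%m-%d")
    # Make sure we tag the latest stable code on the main branch.
    subprocess.run(["git", "fetch", remote, branch], check=True)
    subprocess.run(["git", "tag", tag, f"{remote}/{branch}"], check=True)
    # Pushing the tag starts the pipeline, which then waits for approval.
    subprocess.run(["git", "push", remote, tag], check=True)
    return tag

if __name__ == "__main__":
    print("created", cut_daily_release_tag())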
Loggi's "canary-like" deployment overview. The Release Manager. In his talk about keys to SRE, Ben Treynor advocates that the SRE team needs to keep the developer teams in the production loop and share around 5% of the operational work. During the 2018 holidays (including Black Friday) the deployment process was handled by the SRE team, which increased the operational load and, moreover, turned the deploy process into a "mystic creature." Image from me.me. The SRE team doesn't run production alone; on the contrary, we believe in a shared-responsibility environment where everyone on the engineering team can run Loggi's main application deployment process, and, inspired by Google's release engineering discipline, we created the Release Manager. The main objective of the Release Manager role is to spread the knowledge and the operational load across the engineering team, scaling the deployment process to multiple teams. The Release Manager is responsible for the following actions: coordinate Loggi's main application deployment process; monitor and decide whether to move forward or roll back the code; in case of a rollback, identify and revert the offending PR. We encourage everyone to be part of the rotation, but there is no obligation to take part, and with that said we also created a Release Manager Rotation, a list with the names of everyone who applied to be a release manager. The first version of the Release Manager rotation sheet. The first list was born with 15 names, and today we have more than 40 release managers. The deployment process isn't linear. Even with the process running for a long time, we realized that new release managers' confidence was low, mostly because of the number of decisions that need to be taken during a deployment, mainly in failure cases. Deployment workflow overview. To reach the 40+ release managers we have today, we needed to address this issue somehow, and it is also an issue for new SRE members. One of the SRE members who had recently joined the team had attended Loggi's Xboarding, as well as the SRE one, and realized the same methodology could be used to give new release managers more context in a more practical way — and the Release Manager Onboarding was born. The process was designed to be self-regulated, creating a continuous learning environment, and for that we created a classification based on previous experience as a release manager (RM for short). Full RM — an experienced release manager who has already done several deploys and is able to "spread the word" to the other members. Assistant RM — a less experienced release manager who has already attended some deploys as a shadow and has absorbed enough context to run an entire deploy, pairing with a Full RM. Shadow RM — since the process was born to "spread the word" and transfer knowledge across the engineering team, anyone can take part in the deploy process as a Shadow (another inspiration from our hiring process), which is an observer role. The face of the continuous learning approach. Everything is done over a conference call so that everyone can participate; the Full RM follows the normal flow of the deployment, with the difference that they share their screen with everyone to pass on their knowledge and experience.
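As a simplified illustration of the release manager's promote-or-rollback call during such a canary-style deploy (the metric source and the error budget below are assumptions made for illustration, not Loggi's real thresholds):

# Sketch: decide whether to promote or roll back based on the canary's error rate.
def canary_decision(canary_errors, canary_requests, error_budget=0.02):
    """Return 'promote' if the canary's error rate is within budget, else 'rollback'."""
    if canary_requests == 0:
        return "rollback"              # no traffic observed: play it safe
    error_rate = canary_errors / canary_requests
    return "promote" if error_rate <= error_budget else "rollback"

# 12 errors out of 4,000 canary requests -> 0.3% error rate -> promote.
print(canary_decision(canary_errors=12, canary_requests=4000))
# 150 errors out of 4,000 requests -> 3.75% -> roll back and revert the offending PR.
print(canary_decision(canary_errors=150, canary_requests=4000))

In practice the release manager reads these numbers off dashboards and makes the call; the point is that the decision boils down to a clear, pre-agreed threshold rather than intuition.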
https://partiu.loggi.com/the-release-manager-model-7af93f9f499f
['Italo Santos']
2020-11-08 04:54:30.923000+00:00
['Logistics', 'Technology', 'Sre', 'Continuous Delivery', 'Site Reliability Engineer']
905
16th June 2019
India: Democracy or Datacrazy? In the tech world, India is many things. Provider of skilled labour. Ground zero for the battle between the American and Chinese tech ecosystems. The largest potential open digital market. Eager adopter of new technologies. But what does this mean for India itself? What does it mean for Indian democracy? As a primary target of the global tech giants, it seems natural that India would be on the front line of the uneasy relationship between big tech and democracy. + John Harris in The Guardian The Internet Giveth, And the Internet Taketh Away In September 2018, All India Bakchod (AIB) was India's edgiest, most sought-after comedy collective, part funny and part weird (remember this?). It was supposed to be the irreverent, woke voice of a new generation of urban Indians. By the end of October, however, AIB had been relegated to the pages of virtual history. Its meteoric rise can be attributed to the creative freedom offered by platforms like YouTube. Its fall came about in the midst of the #MeToo movement that hit India in 2018, which itself arose because of Twitter. AIB's story, therefore, is in many ways a chronicle of the #MeToo movement in microcosm. It is also a tale of how the internet made a new generation of cultural entrepreneurs and then knocked them off their pedestal. + Ankur Pathak in Huffpost India Credits: thehauterfly.com Is the Clock Ticking for Tik Tok in India? The pop-philosophical group Nickelback once crooned that "we all just wanna be big rockstars and live in hilltop houses driving fifteen cars". For most people this is just a dream. But in India, apps like Tik Tok have paved the way for ordinary users to become celebrities overnight. Its allure has brought over a million teens and twenty-somethings to upload content daily on its platform. The constant attention needed to fuel the Tik Tok celebrity culture often results in Tik Tok stars performing irrational antics and opening up their personal lives for the entertainment of others. Maybe this doesn't seem like a big deal to those of us used to such celebrity culture on platforms like Instagram (there is also something to be said about the differing socio-economic profiles of the primary users of Instagram and Tik Tok in India), but there are rather drastic consequences for the unfortunate few. This is the story of Tik Tok and its connection to two murders in New Delhi. + Snigdha Poonam in Hindustan Times Credits: Gizmodo It's Time Facebook Faces the Facts Over the last year or so, Facebook has come under a lot of flak for a whole host of issues. And while it does seem to be trying to respond positively to them, in India, Facebook's largest market, it has hit an unprecedented problem: sheer linguistic diversity. In India, the social media giant is unable to remove anti-LGBTQ+, anti-Muslim and pro-extremist opinions swiftly or effectively, and is having an especially hard time eliminating hate speech and misinformation in non-English languages. If you're reading this and are of the opinion, "let's give them a break, it's a hard task to sift through so much content", then on any other day we would agree with you, but this week civil rights activists called FB out on their lax attitude towards solving this issue. Equality Labs (a South Asian American advocacy group) revealed that 93% of the posts it reported to Facebook for containing speech that violates FB's own rules still remain on the platform.
Given the political clout of the misinformation plaguing the country, it's high time we sought a novel way to tackle this issue, in a systematic manner that takes into account the linguistic context of these posts. Although FB arguably might be trying its best to solve a rather difficult problem, there is more to be done — by them, by us, and by the state. + Megha Rajagopalan in Buzzfeed News Credits: abplive.in No Country for Violent Videogames? India never really had much of a mass gaming scene. Internet connectivity was fairly spotty until recently, and most games and platforms were priced out of the range of the average Indian. That is, until the mobile revolution and the availability of cheap, fast data. Now mobile gaming is everywhere, and no game is more ubiquitous than PUBG. The first-person shooter has become a pop-cultural phenomenon: there are PUBG-themed restaurants, PUBG parties, and campus competitions. As can be expected, however, there has been a significant backlash against the game, with concerns expressed regarding its effect on teenagers. Whatever the merits of such concerns, ideally no democratic country should ban a game or jail people for playing it. Unless, of course, the country is India. The fascinating, weird, and scary story of the boys who were jailed for playing a game. ngl. + Pranav Dixit in Buzzfeed News
https://medium.com/team-digital-republic/16th-june-2019-ffb90b0528e6
['Digital Republic']
2019-06-16 07:24:29.196000+00:00
['Society', 'Newsletter', 'Technology News', 'Technology', 'India']
906
My Recap of KubeCon 2019’s “Running Istio and Kubernetes On-Prem at Yahoo Scale”
This article will summarize a talk given by Suresh Visvanathan & Mrunmayi Dhume of Verizon Media at KubeCon San Diego 2019. If the video recording of the presentation is posted, I will link it here directly in the article so you can follow along. Yahoo! has more than 18 production grade Kubernetes clusters; Visvanathan’s team operates one that has more than 150,000 containers, 500 applications, and 1,000,000 requests per second. Their most mission-critical applications, such as Yahoo! Sports, Yahoo! Finance, and Yahoo! Home, are deployed and enabled by Kubernetes and Istio platforms. The talk covered how the teams worked on modernizing their platform to a microservices architecture. Among their goals were:
https://medium.com/cloud-native-the-gathering/my-recap-of-kubecon-2019s-running-istio-and-kubernetes-on-prem-at-yahoo-scale-d5621907fb6e
['Tremaine Eto']
2019-11-20 21:28:45.497000+00:00
['Software Engineering', 'Technology', 'Software Development', 'Kubernetes', 'Istio']
907
The Three Levels of Software Safety
The more software eats the world, the more critical safety is … but what exactly does that mean? Hacker image by catalyststuff. Software engineers are bad at safety because software engineers are not used to the idea that software can injure. All around the industry, the mantle of technical leadership has been passed to people about my age, perhaps a few years older. We grew up when computers weren't so powerful, when their use was an optimization rather than a necessity, when their first commercial successes were in toys. We don't think about safety as being a relevant issue for software, and we need to change our perspective on that. But what does it mean for software to be safe? It's easy to conceptualize how a car could be safe or unsafe. Easy to understand how a medical instrument could be safe or unsafe. But code? I like to think of software safety as being about three levels of concern. Understanding where what you are building fits on those three levels will tell you how best to focus your time and attention in a safety conversation. Level 1: Safety as a Synonym for Security For years, the only "safety" software developers thought about was "memory safety." People will still jump to that conclusion, treating safety as a synonym for security. The connection between memory and safety isn't just a conflation of terms. Safety is ultimately about preventing a system from reaching dangerous states. In software, the principal clearinghouse of state change is memory. So the first line of defense preventing a program from reaching a dangerous state is controlling what can access its memory and how that memory can be accessed. The need to manage what parts of memory a process can access traces its roots back to mainframe timesharing. Resource management and memory safety started as a straightforward customer-service issue (if you're paying for time and someone else is running something that eats up all the memory, it's a problem). Computer scientists theorized that these annoyances could be weaponized, and they almost immediately were, starting with the Morris worm in 1988 and continuing to this day. Level 2: Safety as Predictability But as software grew in complexity and sophistication, state stopped relying on memory so much. A program could be "stateless," passing state from one system to another in a transaction without storing it. The more common these kinds of designs became, the more concurrency bugs and pure functions became part of conversations with software engineers. Undesirable states could be triggered by transactions happening in the wrong order (race conditions), or by depending on other states which were also in flux. Safety at this level of software becomes about predictability (or determinism). The ecosystem that has grown around it includes type systems and formal verification. These were not techniques invented to solve safety problems, but the renewed interest from academia and the penetration of such approaches into developer tooling is a direct result of increased awareness of how dangerous unpredictable systems can be. Level 3: Safety as Ergonomics The area of software safety I find the most interesting is also its newest frontier. It's the unsafe states that can occur when computers become participants in other, larger human-based systems. Like the issues in Level 2, the problems and techniques of Level 3 are not new discoveries. Modern technologies — particularly AI — have made them relevant in ways they were not before.
Software has always been present in human systems, but there is a difference between being present in the system and being a participant in the system. When you use a database to keep track of inventory, software is present in the system. When you build software that automatically reorders items based on that database, the software is now a participant. The key difference is whether software is recording state or altering state, because safety is ultimately about our likelihood of reaching undesirable states. As software automation becomes more and more prevalent, the issue of how software-initiated state changes can affect other human participants in the process is moving into the forefront of the conversation around technology. Many describe this issue as software ethics, but for me it is a safety issue. Ethics assumes unsafe states are foreseeable. When a design decision causes an airplane to crash, we do not call ethics into question unless someone can demonstrate that the people who made that decision had information suggesting that outcome was a possibility. Otherwise it's just an accident. So far the tools we have to handle Level 3 are user-centric design and problem-setting processes. User-centric design helps broaden our perspectives so we can see more potential outcomes from different potential interactions. Problem setting forces us to set boundaries with our technology. The same way memory safety restricts what states a program can access, problem setting asks software engineers to treat these powerful technologies as scalpels and restrict what parts of the process they can participate in. Safety as a Scaling Problem Safety, like risk, is ultimately about scale. At each level, problems that were always there take on new significance, and the undesirable states that were always possible become more common. We shift our attention and develop new safety tools and techniques to accommodate these different priorities. When a program runs on a single machine with a single user, the most relevant safety issues are how that program affects the state of the computer itself. Concurrency issues are still there; ergonomic issues are still there; but the impact either of those issues has is not as significant. If a single isolated computer has concurrency issues, all the data needed to sort them out and determine the correct state is on that single computer. If a single user has a negative outcome from using the program, it is unlikely to spread to other users unless the program is shared. When a single computer grows into a network, the first level of safety is still important, but the second level of safety becomes more dangerous. State moves from computer to computer and the blast radius of undesirable state changes increases. When digital systems stop merely supporting processes and become load-bearing participants in those processes, the third level of safety moves to the front of the line. Again the blast radius of undesirable state changes increases, damaging not just machines but causing people to take actions that are misinformed and potentially injurious. The first step in developing a safety practice is to determine what scale the thing you are building will reach in the immediate future and to build out best practices around that level of safety. Then, as you scale the technology, continue to evolve your approach to reflect the concerns of the higher levels.
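A minimal sketch (not from the article) of the Level 2 concern described above: the same read-modify-write reaches an undesirable state when threads interleave in the wrong order, and becomes predictable again once the critical section is protected by a lock.

# Race condition vs. deterministic behavior with a lock.
import threading

counter = 0
lock = threading.Lock()

def deposit_unsafe(times):
    global counter
    for _ in range(times):
        current = counter           # read
        counter = current + 1       # write -- another thread may have updated in between

def deposit_safe(times):
    global counter
    for _ in range(times):
        with lock:                  # the lock makes the read-modify-write atomic
            counter += 1

def run(worker):
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print("unsafe:", run(deposit_unsafe))   # often less than 400000: a race condition
print("safe:  ", run(deposit_safe))     # always 400000: deterministic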
https://bellmar.medium.com/the-three-levels-of-software-safety-2097610ada60
['Marianne Bellotti']
2020-12-28 03:32:24.095000+00:00
['Cybersecurity', 'Software Development', 'Safety', 'Distributed Systems', 'Technology']
908
Mister SFC — Contemporary men’s jeweller syncing inventory across their multiple online stores
Background The Mister brand has gone from strength to strength since beginning in 2010. From San Francisco and founded by Tom Do, Mister designs and handcrafts top quality, stylish and contemporary men’s jewellery. Mister has been so popular that it has garnered over 3,000 5 star reviews and has established itself as one of the leading men’s jewellery brands around the world. This demand has led to the need to expand its offering internationally, with Mister SFC launching a Mexico store. Mister SFC create top quality jewelry without the exorbitant price tags. New online store creates new problems Creating a new online store specifically for the Mexico market was exciting but challenging. Aside from adapting the brand and products to the native language and currency, Tom now found himself with two inventories to manage. As the Mexico store started to gain traction, Tom and his team found that they spent hours making sure that the inventory on the Mexico store was reflective of the master inventory held on the US store. Soon enough, this task was becoming impossible to manage, and Tom was losing sales because customers were buying out of stock products. For a new store, this was extra painful as it was expensive to acquire a new customer and then to lose them due to poor customer experience. Because it took so long to update inventory to then still lose customers due to selling out-of-stock inventory, Mister SFC found it difficult to have full stock available on the Mexico store, let alone push any advertising onto it. This was a big issue as the Mexico store could not operate at its full capacity. The need to centralise inventory across multiple online stores Tom needed a solution that could automatically sync his Mexico inventory to the master inventory on his US store. He needed a solution that was affordable, simple, and quick to become familiar with. This was where Syncio came into the picture, and the fact that Syncio provided real-time inventory syncing was another huge benefit. “Opening the Mexico store helped us convert more customers but at the same time increased our workload. With the help of Syncio, we were able to save time and headaches for both us and our customers.” — Tom Do, founder Mister SFC With a setup process that takes minutes to complete, Tom estimates that Syncio has saved countless hours from keeping inventory up-to-date but also using Syncio to import new products onto the Mexico store with a click of a button. In fact, Tom has gone so far as to suggest that without Syncio, he wouldn’t even try to handle inventory across two Shopify stores. The Mister Omega Cuff Bracelet. Benefits Mister SFC is now able to benefit from the scale of multiple channels without compromising on the time and cost of having another store. Customers of the Mexico store can also benefit from reliably purchasing products that are in stock, helping Mister SFC maintain its exceptional 5-star reviews. Crucially, Syncio has helped Mister SFC maintain a small team with a high performance culture. Tom shares a secret to the success of Mister SFC, “We are a small team with big dreams — we dedicate our team to each store and provide incentives when we complete each month without any inventory issues, customer service issues, and meet or beat our goals! A happy team is a happy business!” The sky’s the limit for Mister SFC, they have become a truly international brand without having to compromise on the quality of their product and focus on reliable inventory and customer support. 
Tips on setting up a new online store reaching a new country
https://medium.com/syncio/mister-sfc-contemporary-mens-jeweller-syncing-inventory-across-their-multiple-online-stores-cb89fbcd2bc2
['Jimmy Zhong']
2019-02-15 00:22:40.609000+00:00
['Retail Technology', 'Inventory Management', 'Dropshipping', 'Ecommerce', 'Marketplaces']
909
Education system and the world’s demand — Are we on the right track?
Are we on the right track? Or are all the effort and expense useless? You have to make your own decision. Lots of motivational speakers and entrepreneurs are talking about this hot topic. In fact, many motivational speakers directly ask you to stop going to university; according to many successful entrepreneurs, university is just a waste of time, and trust me, they are not wrong — they make this statement after a lot of research and after achieving something in life. This is the era of skills, not degrees: if you are not learning any skill at your university, then what is the purpose of paying heavy fees? Pay half of the university fee to someone who can teach you real skills. How many fresh graduates can you find in the market who are equipped with a good skill set? Hardly 10 out of 100, and those 10 may have learned things by themselves; otherwise you will hardly find a fresh graduate with good skills in his own field, and trust me, the market prefers a skilled person over a graduate with good marks. People agree with this wholeheartedly, but social pressure keeps forcing them to carry on with university studies, even when they themselves know they are learning nothing practical in an education system based on outdated books. Even students in degree programs that are supposed to be based on practical skills admit that they are not learning practical things and not getting hands-on experience with any skill set. We join this education system to become successful in life, and a person can become successful in life only by finding answers to the following questions: How do you earn money? How do you grow a business? How do you find your passion? How do you invest and handle money? How do you handle your emotions in stressful conditions? And many more like these… But unfortunately we are not getting answers to these questions from our education system. According to a WHO survey from 2016, more than 222,000 people aged 10 to 29 commit suicide in a year, and the second most common cause of death among students is suicide due to stress — that is the on-the-spot answer to the last question in the list above: you can't be a successful person, you can't even survive, without knowing the answers to these questions. Maybe there are a few schooling systems you can think of that do provide answers to these questions, but please tell me what percentage of people in our country can afford them? Hardly 20%, maybe less. Another bad thing: this education system is producing job-oriented people. You may find institutes proudly making announcements about their graduates who just got jobs at a reputed firm. Yes, unfortunately our education system feels proud of producing servants. Just check the market: the people who talk about startups, the people who want you to become an entrepreneur, are not degree holders; a few of them are even college dropouts. This is how things work in real life. It is difficult to study engineering, but a person with an MBA, maybe with lower grades, will be sitting in the HR post to recruit you when you go for an interview. They will ask you for experience and prefer experience over the fresh graduate. This cycle indirectly tells you that no matter how good your grades are, if you don't have skills they count for little, and you will only get skills by working practically in the market with people who know how things work — that is what they call experience. It is all about skills; we need to understand that. It is time to take things seriously. A degree without a skill set is a waste of time and money. Our education system does not really care about us.
This system is nothing more than a business now. This education system earns money from those who have passed through it, by teaching their children and preparing them to get jobs and send their own children to school again, and this cycle goes on and on — and we completely fail to understand that they are securing their own future by preparing us to send our children to school. We need to get out of this education system, or at least change it, so we can escape this job-oriented mentality. A monkey chooses a banana over money because he doesn't know he can get lots of bananas with that money — this is the example the founder of Alibaba uses to differentiate between business and a job. So don't be a monkey; try to understand the difference. Stop chasing jobs. A country cannot produce enough jobs (government + private) for the number of students graduating from universities every year.
https://medium.com/@hammadmaqbool/education-system-and-the-worlds-demand-are-we-on-right-track-31acd37042e3
['Hammad Maqbool']
2020-12-10 14:51:25.868000+00:00
['Education Technology', 'Indian Education System', 'Skills Gap', 'Skills Development', 'Education Reform']
910
Company announces new augmented reality headset with hand tracking
https://medium.com/futuro-exponencial/empresa-anuncia-novo-headset-de-realidade-aumentada-com-rastreamento-manual-3714c12780c4
['Futuro Exponencial']
2018-04-11 03:14:58.298000+00:00
['Virtual Reality', 'Augmented Reality', 'Technology']
911
Man against the Machine — Creatives Make or Break you
Promoting mobile apps with display ads is hard. Users mostly ignore your ads, conversion rates are naturally low, and you are competing for people's attention with other apps (and ads). The ad creative you show is one of the very few variables that marketers have full control over, yet the importance of a sensible creative strategy is often overlooked. Designing and testing as many creative variations as possible should be the norm, but the process is often limited to just a few options per set (and sometimes there are a lot of sets — think different countries/products/special dates) because design resources are usually very limited. When we analyzed the click-through rate (CTR) evolution for a creative that had been used for a couple of weeks without being changed, we found that the CTR almost halved after 5 weeks. But when we introduced new creatives… the impact of the refreshed ads was significant, and that is just the tip of the iceberg in terms of what you can do with creatives. For starters… an ad is not a single image 👀 An ad is made up of several creative elements: Logo Design, Colors, Images, Copy, Font Style, Font Size, Call to Action Button Color. By using dynamic creatives you can treat each of these elements as an optimization variable. So, are some more important than others? We conducted a series of A/B test experiments to find out: whether changing an individual component of the creative theme had an impact on user behavior; whether certain elements had a bigger impact; and whether the difference between standard and customized elements had a significant impact. Each experiment changed only one particular component of the creative theme. Experiment 1: Brand Colors. We ran two sets simultaneously. The two designs were identical (same image, text, and text styles) except for the color palette. The main color of the palette (the brand's most representative color) was used as the main color of the treatment design. Image for illustration purposes only. The tests were conducted using real brands. Table 1: Brand Color Experiment Results. Source: internal Jampp data. The results showed that the ads where the brand colors were more prominent had a 17.18% better CTR than the control ads. This suggests brand recognition impacts conversions, though it may also have to do with this brand's particular palette. If your app's main color is black, you may find alternative colors work better. Experiment 2: Text Length. Image for illustration purposes only. Modifying the copy of the ad while keeping all other elements equal seems to have no significant impact on performance. The ads with less copy did perform slightly better, though (3.77% higher CTR). Table 2: Text Length Experiment Results. Source: internal Jampp data. Experiment 3: Images. Image for illustration purposes only. Table 3: Image Experiment Results. Source: internal Jampp data. While CTR was not greatly affected, conversion rate did vary per image. In fact, using different images seems to be what has the biggest impact on user behavior (more than changing colors, copy, copy length, or text styles). We saw up to 5x higher CVR in some cases. The difference is not always that significant, but images definitely do have an impact. Experiment 4: Design by Vertical. After we tested the individual ad components, we decided to compare performance between themes. Did certain designs perform better? Were there any trends per vertical? We tested multiple themes for different verticals across a wide range of advertisers.
The tests were performed under a multivariate testing framework, where the only thing that changed between variants was the theme itself. At the end, the different designs were ranked from highest to lowest CTR in order to pick the ones that performed best for each particular vertical. Some of our customizable themes 😎 Table 4: Theme Experiment Results (themes can be seen in the previous image). Source: internal Jampp data. While these results are not conclusive, in the sense that the Circle theme won't necessarily perform better for every social app out there, they do raise a few interesting points. Follow the Data. The "prettiest" ad won't necessarily be the best-performing one. Food delivery apps often have mouth-watering photos of delicious food, and yet, in the experiments we conducted, the ads that performed better weren't the ones featuring dishes more prominently. It's important to test different ads and see what resonates with your users. Don't Settle. Oftentimes advertisers will find a format/design that performs well and "stick to it". However, as we mentioned at the beginning of the post, all creatives (no matter how awesome) will see their CTR drop over time. It may well be that the Screenshot and Typographic ads performed better than the Circle and Screenable ads on the food delivery campaigns because they were more different from the ads that were running before. If you are not creating lots of different ads and updating them often, you are missing out. Images and Colors. According to the experiments we ran, images and colors are the ad components that have the most significant impact on CTR. Font style and font size, by contrast, barely moved the needle. Last but not least… Dynamic Creative Optimization (DCO), a.k.a. let the machine do the hard work. When it comes to mobile ads, creativity, design, and branding play their part. For sure. But ultimately it's a numbers game. You want to test, and test often, and use the results to create compelling messages. But testing is not for amateurs. An ad has several creative components; when you analyze them in relation to other variables like device characteristics, the publisher apps where we find those users, etc., and multiply that by the number of impressions… that's a lot of data… Enter the machine 🤖 With Jampp's Creatives Lab, we enable the creation of multiple fully customizable ads in line with brand requirements. Creative performance is then analyzed and optimized using machine learning. We use a refined version of Multi-Armed Bandits (MAB), which allows us to select the natural winners without sacrificing the exploration process, as you would with a greedy algorithm. "Thou shalt test creatives" is not a new commandment; what's new is what we can test and how we can use those insights. You can read more about our Creatives Lab here, or get in touch!
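As a rough illustration of the bandit idea mentioned above (this is a generic Thompson-sampling sketch, not Jampp's actual algorithm, and the click probabilities are simulated), each creative keeps a Beta posterior over its CTR; sampling from those posteriors concentrates impressions on the winners without ever fully stopping exploration:

# Generic Thompson-sampling bandit over ad creatives with simulated clicks.
import random

creatives = ["circle", "screenshot", "typographic", "screenable"]
true_ctr = {"circle": 0.012, "screenshot": 0.018, "typographic": 0.015, "screenable": 0.010}
clicks = {c: 0 for c in creatives}        # observed successes per creative
impressions = {c: 0 for c in creatives}   # observed trials per creative

def choose_creative():
    # Sample a plausible CTR for each creative from Beta(clicks+1, misses+1).
    draws = {c: random.betavariate(clicks[c] + 1, impressions[c] - clicks[c] + 1)
             for c in creatives}
    return max(draws, key=draws.get)

for _ in range(50_000):                   # simulate 50k ad impressions
    c = choose_creative()
    impressions[c] += 1
    if random.random() < true_ctr[c]:     # simulated user click
        clicks[c] += 1

for c in creatives:
    ctr = clicks[c] / impressions[c] if impressions[c] else 0.0
    print(f"{c:12s} impressions={impressions[c]:6d} observed CTR={ctr:.3%}")

A greedy algorithm, by contrast, would lock onto whichever creative looked best early on and stop exploring the others, which is exactly the trap the exploration step avoids.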
https://medium.com/jampp/man-against-the-machine-creatives-make-or-break-you-4a369af89ac7
['Franco Passamonte']
2018-07-18 19:23:00.152000+00:00
['Mobile', 'Creative Technology', 'App Marketing', 'Mobile Ads']
912
Thinking of a new way for version control?
Presenting Docker: Docker is a set of platform-as-a-service (PaaS) products that use OS-level virtualization to deliver software in packages called containers. Containers are isolated from one another and bundle their own software, libraries, and configuration files; they can communicate with each other through well-defined channels. All containers are run by a single operating-system kernel and therefore use fewer resources than virtual machines. Here are some commands that you can save and use as a cheat sheet; see the list below. Hope this helps you better understand the topic (•‿•) Written by: Kunal Makwana
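The original cheat-sheet images are not reproduced in this text; as a stand-in, here are a few widely used Docker CLI commands (image and container names are placeholders):

docker build -t myapp .            # build an image named "myapp" from the Dockerfile in the current directory
docker images                      # list local images
docker run -d -p 8080:80 myapp     # run a container in the background, mapping host port 8080 to container port 80
docker ps                          # list running containers
docker logs <container>            # show a container's output
docker exec -it <container> sh     # open a shell inside a running container
docker stop <container>            # stop a running container
docker rm <container>              # remove a stopped container
docker rmi myapp                   # remove the image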
https://medium.com/@kunalm039/thinking-a-new-way-of-version-control-6d14027c76cd
['Kunal Makwana']
2020-12-20 05:00:05.913000+00:00
['Coding', 'Virtualization', 'Information Technology', 'Docker', 'Linux']
913
Why I’m so fond of the iPhone 12 Pro Max
Why I'm so fond of the iPhone 12 Pro Max I hear you, I feel you. For the past couple of years the Pro and Pro Max iPhones have been identical except for the Max part. The size part. But this year… this 2020 year… the cameras are different, and even the size differences… are different. That's why many of us — me included — are having a hard time deciding between the iPhone 12 Pro and the iPhone 12 Pro Max. Especially now that pre-orders are less than a week away. So if you're still on the fence; if you prefer the classic Pro size but want that Pro Max camera or battery; if you don't mind big phones but worry the new Max is just too big; if you're worried you might regret it if you don't go Max; if you have plenty of FOMO and YOLO but fear the Pro Max price is simply prohibitive… then, in the hope that I can help you out, here's why I'm personally choosing the iPhone 12 Pro Max this year. iPhone 12 Pro Max price: only $100 more… for more. The Max is $100 more than the Pro. And that's all. For everything we'll cover next, the cost difference is only $100. Which I understand can be a lot for some people — 10% more, even on a $1,000 phone. But with trade-ins, installments, and upgrade programs, the difference over the course of a year, let alone several, won't be that great. Of course, never — I mean never, never — spend a dime more than you can or should, and if you have a hard limit, stick to it absolutely. I'm just saying that if you don't have other constraints, the price alone isn't a huge flashing neon stop sign for the Max. Not once you've decided to at least go Pro. iPhone 12 Pro Max size: bigger than ever. iPhone 12 Pro, iPhone 11 Pro Max, iPhone 12 Pro Max. This size, though. I mean, Apple has already made the Pro bigger: it's now full iPhone XR, iPhone 11 size, up from 5.8 inches to 6.1. So if you go Pro you already get bigger and taller. The Max hasn't grown by much, though: the screen has gone from 6.5 inches to 6.7. Still, this is the most Max iPhone Apple has ever made. But part of that is a scaled-down bezel. OK, a little bit of that. It is 0.11 inch taller, but only 0.01 inch wider, and it is also 0.03 inch thinner. Although, yeah, 0.06 ounces heavier. All of this to say that if you were okay with one of the Plus-size iPhones, or if you've ever owned or held an iPhone XS Max or iPhone 11 Pro Max, the iPhone 12 Pro Max isn't going to feel much different. In fact, the biggest difference might just be the all-old-is-new-again squared-off design, because losing the iPhone 6-through-iPhone 11-era curves makes the iPhone 12 series feel more… substantial. Grippier, maybe. The edges don't just slide against your hand, they press into it, which some people prefer and others… really don't. Let me know how you feel in the comments. Anyway, if you're coming from a smaller iPhone — not even talking about an original 5 or SE, just a 6 or an X or even an XR or 11 — you might want to think hard about the size. If you're coming from a previous Plus or Max or, damn it, yeah, basically any Android phone this side of a non-XL Pixel, you'll probably be fine. iPhone 12 Pro Max screen: more information or more visibility. iPhone 6 Plus, iPhone XR, iPhone 12 Pro Max. The colors — Silver, Graphite, Gold, and Pacific Blue — are the same, same, same, same on both Pros, as is the design: stainless steel antenna bands, matte glass on the rear, Ceramic Shield on the front. The notch. All the same on both, all the Pro boxes checked.
Even the screens of the iPhone 12 Pro and iPhone 12 Pro Max are identical in every respect except for the size. Same OLED, same HDR, same contrast ratio, same typical and max brightness levels. Everything the same. Except for the size. With the increased size, you get a higher pixel count on the iPhone 12 Pro Max screen: an extra 246 pixels vertically and 114 horizontally, which is something like half an original iPhone — which sounds ridiculous, but with today's pixel counts and densities it's not really that big a deal. And, like, 10 and 6 more than the 11 Pro Max. Probably not even enough to put extra strain on the A14 Bionic chipset inside. It is physically larger, though, which means you can fit a bit more on the screen, if that's what you like. iPhone 12 Pro Max battery: Max is max. The A14 Bionic chipset is the same in the iPhone 12 Pro and Max — same architecture, same IP. Same Qualcomm X55 modem for low- and mid-band FR1 5G around the world and FR2 high-band mmWave in the US. Both have 6GB of RAM, which can keep even the most popular, bloated social media apps and games in memory much longer than before. And both go up to 512GB of storage if you want to pay the premium. Where they differ is that while the Pro Max has a bigger screen to power and more pixels to push, it also has a bigger battery to do it with. Enough bigger that Apple rates it for an additional 3 hours of local video playback, an additional hour of streamed video, and an additional 15 hours of audio playback. Both work with Apple's new MagSafe magnetic inductive charging system, and both have a Lightning port and can fast-charge to 50% in 30 minutes with Apple's new 20-watt power adapter — which is no longer included in the box. And yes, I'm still super salty about that. But the bottom line here is that bigger simply means more. Especially with the regular Pro taking a bit of a hit on battery life, thanks to the new design and especially 5G, the Max is still a Max-as-in-max battery for those who want an iPhone that lasts as long as an iPhone can. And especially with the new camera, which is why I will personally be buying the iPhone 12 Pro Max this year. It's a little extra for me, but I just want the best camera, and adding a bigger battery makes the choice for me. You have to make your own call, though. Let me know what you decide and why in the comments!
https://medium.com/@mzabbrah/why-im-so-fond-of-the-iphone-12-pro-max-41cb5ddf2739
['Brah Tim']
2020-12-14 14:59:07.218000+00:00
['iPhone', 'Tech', 'Iphone 12 Pro', 'Mobile', 'Technology']
914
X-Ray Detectors Industry Insights — Growing Demands 2024
X-Ray Detectors Market: Factors such as declining prices and the benefits offered by digital detectors, growing public and private investments in digital imaging technologies, and reimbursement cuts for analog X-rays are driving the growth of the X-ray detectors market. According to the research report, the global X-Ray Detectors Market is projected to reach $3.8 billion by 2024, at a CAGR of 6.1% during the forecast period. By application, the market is segmented into medical, dental, security, veterinary, and industrial applications. The medical applications segment is expected to grow at the highest rate during the forecast period. The growth in this segment can primarily be attributed to advancements in medical technology, a rising geriatric population, and the increasing number of orthopedic and cardiovascular procedures. Based on type, the market is segmented into flat-panel detectors (FPDs), computed radiography (CR) detectors, charge-coupled device (CCD) detectors, and line-scan detectors. In 2019, the flat-panel detectors segment is expected to account for the largest share of the X-ray detectors market. The growth in this market is mainly driven by the advantages offered by FPD-based portable digital systems (such as high-quality images, faster scanning, increased patient throughput, and multiple storage options), their decreasing prices, and the growing demand for retrofit FPD-based digital X-ray systems. Recent Developments > In May 2017, Varex Imaging Corporation acquired PerkinElmer’s medical imaging business to expand its digital flat-panel detectors business. > In December 2016, Canon Inc. acquired Toshiba Medical Systems Corporation to enhance its position in the healthcare industry. The X-ray detectors market is highly competitive with several big and small players. Prominent players in this market include Varex Imaging Corporation (US), Thales Group (France), Canon, Inc. (Japan), Fujifilm Holdings Corporation (Japan), Agfa-Gevaert Group (Belgium), Carestream Health (US), Vieworks Co., Ltd (South Korea), Hamamatsu Photonics K.K. (Japan), Konica Minolta, Inc. (Japan), Teledyne DALSA Inc. (US), Analogic Corporation (US), Rayence (South Korea), and DRTECH (South Korea). Download PDF Brochure: https://www.marketsandmarkets.com/pdfdownloadNew.asp?id=7004984
https://medium.com/@sara-keller/x-ray-detectors-industry-insights-growing-demands-2024-1643be247a40
['Sara Keller']
2021-09-09 07:54:40.834000+00:00
['X Ray Machine', 'Imaging', 'Radiology', 'Medical Devices', 'Healthcare Technology']
915
7 Essential Non-Technical Skills Every Successful Developer Should Have
7 Essential Non-Technical Skills Every Successful Developer Should Have Becoming great is much more than just writing code As developers, we all would like to get ourselves a well-paid job. We also probably have plans to gradually move up the corporate ladder and reach success. Unfortunately, not everyone manages to reach this goal. Some find it really hard to go up the ladder, while others find it difficult to land a developer job. It is indeed true that you need strong technical skills as a developer to succeed. But it’s also true that it takes more than just technological expertise to rise through the ranks and win the opportunity to command teams, projects, and eventually businesses. By developing and broadening your non-technical skills, you increase the chances of not just landing a “good job” but a great career in technology as well. “The soft skills are the hard skills. People who master the critical leadership skills today are anything but touchy-feely — they’re direct, they’re clear, they’re compassionate, they’re no-nonsense. But they’re not soft.” — Amy Edmondson Here is a list of non-technical skills that will ensure you a great career in tech.
https://medium.com/better-programming/7-essential-non-technical-skills-every-successful-developer-should-have-9308dfa3358e
['Mahdhi Rezvi']
2020-07-05 15:05:19.896000+00:00
['Software Development', 'Technology', 'Productivity', 'Work', 'Programming']
916
Ep. 1 — Connected Everything. Prepping for a Digital Tomorrow with…
Prepping for a Digital Tomorrow with Zafar Razzacki Episode 1. Connected Everything: Prepping for a Digital Tomorrow See bottom of the post for links to your preferred podcast player Podcast Episode 1 Description Everything is becoming a connected device — my car, my speakers, my TV, my coffee machine, my toothbrush — what’s next? My toilet paper? Join us as we talk about human-centered design and connected devices, particularly Autonomous Vehicles, with Zafar Razzacki. Zafar is an expert on the design, development and launch of connected and innovative products. In this conversation, we cover a variety of topics on the future of connected devices. We discuss the future of autonomous vehicles and where we are in the roadmap today. We dive into some critical questions of safety and accountability in the self-driving world. We explore the tough meta questions about the impact connected everything will have on us as humans. How close are we to the Jetsons? Will we use the time we get back from self-driving to have more conversations, or to mind-numbingly absorb more Charli D’Amelio videos on TikTok? Tune in and check it out! Today’s Guest: Zafar Razzacki Zafar has led the design, development, and launch of a variety of innovative products and services including: autonomous vehicles and mobility services; software platforms and digital experiences; and medical devices. His experience spans over 15 years of product, marketing, and strategy roles at companies including Google, General Motors, Accenture, and multiple technology start-ups. He describes himself as a hybrid strategist, designer, and product manager who is obsessive about delivering great experiences through human-centered design and modern product development practices. Interesting Relevant Frameworks: Other podcast players: Google Podcast, Pocket Casts, Anchor, Spotify, Breaker, RadioPublic. Check out Wharton Tech Toks’ socials: IG: whartontech LinkedIn: https://www.linkedin.com/company/wharton-tech-club/
https://medium.com/@whartontechtoks/podcast-ep-1-connected-everything-9fc235bf81d2
['Wharton Tech Toks']
2020-11-10 16:36:27.091000+00:00
['Product Management', 'Technology', 'Autonomous Vehicles', 'Podcast', 'Strategy']
917
Nightfall InfoSec Roundup: January 6 to January 13
Nightfall InfoSec Roundup: January 6 to January 13 In this week’s edition: A Facebook update bug causes a user data leak. The Citrix ADC vulnerability, which has yet to be patched, has a proof-of-concept exploit that hackers are likely already taking advantage of. What to expect from Iranian state hackers or those who might conduct cyberattacks using their name. Read these stories and other timely infosec news below. Cyber Attacks & Breaches | Vulnerabilities & Exploits | Risks & Warnings Why The Threat Of An Iranian Cyberattack Should Matter To Your Organization (Mondaq) January 10th: The ongoing Iran-US tensions, and potential for retaliatory cyberattacks, call attention to the need for all organizations to consider whether they are prepared to defend against a cyberattack. Of all the tools Tehran has to retaliate, including its large military, Iranian-backed proxies around the Middle East and robust disinformation operations, international experts believe there is a strong likelihood that Iran will utilize its well-known cyber-warfare capabilities to inflict further damage over time. “That’s Where Things Get Really Scary:” Gaming Out an Iranian Cyberattack (Vanity Fair) January 9th: While several possible scenarios could manifest from the latest global conflict, the big worry in Washington right now isn’t simply what Iran might do, but what other countries, specifically Russia or North Korea or even China, could do and then blame Iran. These will be the main cybersecurity trends in 2020 (World Economic Forum) January 7th: Dorit Dor, product VP at Check Point Software Technologies, forecasts five major trends for cybersecurity in the coming year. Protecting manufacturing from cyber breaches (TechRadar) January 7th: Manufacturing has been revolutionized by the development of increasingly sophisticated and connected operational technology (OT). But as with any integration, there are always going to be teething problems. The crucial bump in the road towards Industry 4.0 is cybersecurity. OT systems have rarely been subject to the same upgrade and replacement cycles as their IT systems, and connecting OT to the wider network brings with it all of the security risks to which IT has been beholden for decades. Join us next week for the next edition of Nightfall’s newsletter by subscribing here!
https://medium.com/@nightfallai/nightfall-infosec-roundup-january-6-to-january-13-490c2e93d082
['Nightfall Ai']
2020-01-14 18:31:50.729000+00:00
['Cybersecurity', 'Technology News', 'Data Security', 'Infosec', 'Data Loss Prevention']
918
Hey, wait a minute…
Hey, wait a minute… This is supposed to be a video call. I have been on exactly one Zoom call in these 4–6 pandemweeks. LIVIN’ RIGHT https://www.dieselsweeties.com/ics/1010/
https://rstevens.medium.com/hey-wait-a-minute-78839d998644
[]
2020-04-20 02:52:48.318000+00:00
['Social Distance', 'Technology', 'Humor', 'Friendship', 'Comics']
919
Redefine ur Cloud Journey With Terraform and Packer
Redefine ur Cloud Journey With Terraform and Packer For most DevOps professionals, creating a VM usually consists of spinning it up on a cloud using Terraform and then using a config management tool (e.g. Ansible or Puppet) or a bootstrap script (e.g. cloud-init) to convert the raw Virtual Machine to a purposeful server. We all have been doing it for a long time and it works for most cases, but it comes with some drawbacks. I will give you an example from personal experience. We have a horizontally scalable web server running on GCP using managed instance groups (MIG). Whenever the MIG wants to add new nodes to the group, it takes some time for the node to be ready. After the server is up, Ansible takes a few minutes to install packages, do configurations, and make the node ready to process requests. For a production server processing huge volumes, the time lag means dropped requests that hamper the customer experience. To work around this, we had to drop the CPU and memory threshold so that the MIG can spin up a new server before it is too late. That also means that there are too many false positives and we’re wasting valuable infrastructure.
Also, consider the amount of time and effort it would take to patch every server and ensure that the patches are right on every node. The complexity and risk increase with the number of nodes. Well, we figured out that we were doing it wrong and there is a better way: by using Immutable Infrastructure. Now, what do we mean by this term? Well, it means that instead of spinning up infrastructure and configuring it after it is ready, we can pre-bake all configuration within the OS image so that when the machine is up, it is ready. It also means that you are not supposed to configure it any further, as the next time a copy of the machine is launched, it will not have your settings — hence the immutability. It’s the concept of containers within the VM space. While this concept is not new, there were challenges with the approach because there weren’t tools available then. Luckily, with modern DevOps and CI/CD practices, we can easily bake a new image and release it when we need to. You can also run tests on the new image before you decide to release it. It helps in reducing a lot of pain that you might incur in production. Rolling back a change is easy as well. Simply deploy the old image. You can do canary deployments with this approach if you need to and do A/B testing with ease. HashiCorp has an impressive DevOps stack that relates to both managing infrastructure and server configuration. While HashiCorp’s Terraform is the leading multi-cloud Infrastructure as Code solution, HashiCorp Packer provides you with the power to create Immutable Infrastructure. Packer helps you take a base OS image, run it in a temporary VM, customise your image by running shell scripts or even Ansible playbooks, test the configuration, build the image, and push it to your image repository. So the next time you want to release a new configuration, you just update your instance template. In this article, we will have a look at some of these features in the hands-on demo. As we work primarily on GCP, we will use the GCP example, but the principle is the same and the tool supports any public cloud platform.
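To make the bake-then-deploy workflow concrete, here is a minimal Packer HCL2 sketch for building a GCP image. The project ID, zone, image family, and package list are hypothetical placeholders, not values from the article, so adjust them for your environment; an ansible provisioner block could stand in for the shell step if you already have playbooks.

```hcl
# image.pkr.hcl: minimal sketch; project, zone, and packages are illustrative only
packer {
  required_plugins {
    googlecompute = {
      source  = "github.com/hashicorp/googlecompute"
      version = ">= 1.0.0"
    }
  }
}

locals {
  # e.g. 20201216191159: gives every build a unique, sortable image name
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "googlecompute" "web" {
  project_id          = "my-gcp-project"   # assumption: replace with your project
  source_image_family = "debian-11"
  zone                = "us-central1-a"
  image_name          = "web-server-${local.timestamp}"
  ssh_username        = "packer"
}

build {
  sources = ["source.googlecompute.web"]

  # Bake the configuration in now so new MIG nodes boot ready to serve traffic
  provisioner "shell" {
    inline = [
      "sudo apt-get update",
      "sudo apt-get install -y nginx",
    ]
  }
}
```

Running packer build on a template like this produces a timestamped image; pointing the Terraform-managed instance template at the new image is then the only change needed to roll a configuration forward, or back.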
https://medium.com/@henry-3238/redefine-ur-cloud-journey-with-terraform-and-packer-d9d4ade89990
[]
2020-12-16 19:11:59.189000+00:00
['Programming', 'Technology', 'DevOps', 'Terraform']
920
How do I log in to my NETGEAR home router?
No matter where you live or what you do, one thing is certain: everyone demands good Wi-Fi! Nobody wants the frustration of buffering or slow internet speeds while streaming, gaming, or getting some work done. To get excellent Wi-Fi, you need a stable and secure Wi-Fi router! With Netgear being a Wi-Fi expert and market leader for over two decades, you get some of the latest and most inventive technology for Netgear router troubleshooting and for solving Wi-Fi connectivity problems in your home. NETGEAR Router Setup The Netgear router is the modern development in a legacy of intelligent, creative routers. It offers an extensive range of speeds and features to satisfy your home’s requirements for internet access, music and video streaming, and other services. It is built with dual bands that keep you connected and virtually immune to interference from other devices. The Netgear Genie app, developed by Netgear, makes the Netgear router setup and Netgear router login processes easier by offering functions like Parental Controls (which filter unwanted web content and block specific sites) and Guest Access (which allows a public wireless guest network for visitors). Netgear Router Setup Is it troublesome to set up your Netgear Wi-Fi router? Follow these steps to perform a trouble-free Netgear router setup with the Smart Wizard interface. For the setup, run the NETGEAR Setup Wizard, which guides you through the setup procedure. The installation time for most routers is no more than 20 minutes. Steps to do Netgear router setup – The Netgear router setup process can be done in two ways: Netgear router wired setup and Netgear router wireless setup. 1. Netgear Router Wired Setup — If you prefer a wired setup, follow these steps: Connect an Ethernet cable from your DSL modem to the WAN port on the back of your Netgear router. Take an additional Ethernet cable and connect one end to one of the router’s LAN ports and the other end to your device, such as a laptop or PC. Switch off your modem and router for approximately 2 to 3 minutes, then power them back on and wait until all the lights on your router are stable. Launch any internet browser of your choice, enter www.routerlogin.net in the address bar, and sign in with the default login credentials when prompted. The default username is admin and the default password is password. Follow all the instructions, along with any from your internet service provider, to complete the Netgear router setup properly. Finish the setup process and download the newest firmware version available for your Netgear router. 2. Netgear Router Wireless Setup: If you prefer a wireless setup, follow these steps: Plug an Ethernet cable from your DSL modem into the WAN port of the Netgear Wi-Fi router. Switch on your Netgear router and DSL modem and wait for the lights on your Netgear router to become steady. Note the SSID and network key shown on the label beneath your wireless Netgear router. On your computer, click the wireless icon, open the list of available networks, and connect the PC to the Netgear router’s Wi-Fi network. On your PC, open any preferred web browser and enter the web address www.routerlogin.net or www.routerlogin.com in the address bar.
Open the Netgear Genie login page and sign in with the default login credentials, that is, the default username and password. Steps to do Netgear router login – The Netgear router login process can also be done in two ways: either through the default web address, routerlogin.net, or through the Netgear router application. 1. Netgear router login via routerlogin.net: To log in to your Netgear router using the web address routerlogin.net, follow the steps below: Open any preferred web browser on a device connected to your Netgear router. Then, type http://www.routerlogin.net or http://www.routerlogin.com into the browser’s address bar. Reminder: You can also enter your router’s default IP address, which is 192.168.1.1 or 192.168.0.1. Press the Enter key or click the Search button. On the Netgear router login page that appears, fill in the default username and password if you haven’t changed them yet. The Netgear router default username and password are admin and password, respectively. Note: Both of these default login entries are case-sensitive. Click the OK or Log In button to reach the Netgear router login page, and the BASIC Home screen displays on your computer. 2. Netgear router login via the Netgear app: Apart from logging in through the official web domain routerlogin.net or the router IP address, you can also log in to your Wi-Fi router via the Netgear mobile application. For this, you will have to download the appropriate application for your Netgear router. Here are the steps for Netgear Wi-Fi router login through the Netgear mobile app: Open the Google Play Store or App Store, depending on the operating system of your mobile phone. Download the application available for your Netgear router from the store. When the app has downloaded, launch it. Next, you will see the Netgear login or sign-in page for your Netgear router. Then, enter the default username and password for your Netgear router. Lastly, click the Login button to access the Netgear login page. Are you still facing trouble performing the Netgear router setup or Netgear router login? Refer to the Netgear router troubleshooting steps or contact us at (801)–8903–242.
https://medium.com/@jessicasmith0/how-do-i-log-in-to-my-netgear-home-router-4942b0c91e00
['Jessica Smith']
2020-12-07 09:24:22.859000+00:00
['Computers', 'Information Technology', 'Router', 'Support', 'Hardware']
921
10 Lesser-Known Python Libraries for Machine Learning
1. Pandas-ml This library integrates the pandas, Scikit-learn, XGBoost and Matplotlib functionality together to simplify data preparation and model scoring. In terms of data handling, pandas-ml uses a data format called ModelFrame which contains metadata including information about the features and target so that statistics and model functions can be more easily applied. As ModelFrame inherits all the features of the pandas.DataFrame, all pandas methods can be directly applied to your machine learning data. 2. Category encoders The category encoders library simplifies the handling of categorical variables in machine learning. It can be used alone to transform variables but is also integrated with Scikit-learn and can be used within Scikit-learn pipelines. Although Scikit-learn has some functions to convert categorical features to numeric such as one-hot encoding and ordinal encoders, category encoders provide a much wider range of methods for handling these data types. In particular, it includes a number of methods for handling high cardinality features (features with a large number of unique values), such as the weight of evidence transformer. 3. Yellowbrick Yellowbrick is a visualisation library specifically designed for machine learning models developed with Scikit-learn. This library provides a wide range of simple to use visualisations that aid various aspects of the machine learning process including model selection, feature extraction and model evaluation and interpretation. 4. Shap The Shap library uses a technique based on game theory to provide explanations for the output of any machine learning model. It can provide interpretability for models developed using most popular python machine learning libraries including Scikit-learn, XGBoost, Pyspark, Tensorflow and Keras. 5. Feature-engine Feature engine is an open-source python library designed to make a wide range of feature engineering techniques easily available. Feature engineering generally covers the following steps: Imputing missing values Outlier removal Encoding categorical variables Discretization and normalisation Engineering new features The feature-engine library contains functions and methods to carry out most of these tasks. The code follows the Scikit-learn functionality with fit() and transform() methods and can be used within Scikit-learn pipelines. 6. Feature tools Feature tools is a Python framework for automated feature engineering. This library takes either a single data set or a set of relational data sets and runs Deep Feature Synthesis (DFS) to create a matrix of both existing and newly generated features. This tool can save considerable time during the feature engineering process. 7. Dabl The Dabl package aims to automate some of the common repetitive machine learning tasks such as data cleaning and basic analysis. Dabl uses a “best guess” philosophy to apply data cleaning processes but allows the user to inspect and amend the processes if necessary. I previously wrote a more comprehensive guide to Dabl in this post last year. 8. Surprise This library is designed for simple implementation of explicit recommendation engines in Python. It has an interface which is very similar to scikit-learn so it is very intuitive if you are already a user of that library. It has a wide range of built-in algorithms you can evaluate your data on but you can also build your own. There are also tools for cross-validation and hyperparameter optimisation. 9.
Pycaret Pycaret is designed to be an extremely low code machine learning library for Python. It is aimed at both data scientists who want to build models more quickly and also non-data scientists who are interested in building simple models. The library contains low code examples for the entire machine learning process including preprocessing, model training, evaluation and tuning. All of the common estimator types are included such as logistic regression, decision trees, gradient boosting classifier and cat boost. This library also contains a very simple deployment solution that will deploy the final model on an AWS S3 bucket. The model can always be saved as a pickle file for alternative deployment solutions. 10. Prophet Prophet is a python library designed to greatly simplify time series forecasting open-sourced by Facebook. This library uses a similar interface to Scikit-learn and enormously simplifies the process of time series forecasting. There are some useful plotting functionalities included to visualise and evaluate the models. It is also very easy to model in seasonality and holiday effects including your own custom date periods.
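As a quick illustration of how little code a Prophet forecast needs, here is a minimal sketch. The toy series, the weekly pattern, and the 30-day horizon are made up for the example; only the ds/y column names and the API calls are what the library expects (newer releases install as prophet, older ones as fbprophet).

```python
import pandas as pd
from prophet import Prophet  # assumption: recent package name; older releases ship as `fbprophet`

# Prophet expects a DataFrame with a date column `ds` and a value column `y`
history = pd.DataFrame({
    "ds": pd.date_range("2020-01-01", periods=90, freq="D"),
    "y": [i + (i % 7) * 3 for i in range(90)],  # toy series with a weekly bump
})

model = Prophet()          # seasonality and holiday effects can be configured here
model.fit(history)

future = model.make_future_dataframe(periods=30)  # extend 30 days past the history
forecast = model.predict(future)

# yhat is the point forecast; yhat_lower/yhat_upper give the uncertainty interval
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```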
https://towardsdatascience.com/10-lesser-known-python-libraries-for-machine-learning-fca7ad32e53c
['Rebecca Vickery']
2020-05-27 07:27:28.805000+00:00
['Technology', 'Artificial Intelligence', 'Machine Learning', 'Data Science', 'Programming']
922
So You Wanna Be a Manager…
Photo by Matteo Vistocco on Unsplash If we want people to fully show up, to bring their whole selves including their unarmored, whole hearts — so that we can innovate, solve problems, and serve people — we have to be vigilant about creating a culture in which people feel safe, seen, heard, and respected. — Brene Brown When someone tells me they are interested in becoming a manager, I am simultaneously excited and skeptical. I want to make sure they really understand their motives and what they’re getting into. Once I know they’re not just in it for the title, I work with them to assess their current strengths, growth opportunities, and management maturity. Some of the things I look for: How self-aware are they? Do they see themselves accurately? How well do they handle conflict? Are they able to work with others? Can they delegate and teach? Do they have realistic expectations of what it means to manage? Management and leadership is a never-ending adventure that requires ongoing work to do well. Everyone has room to grow. I just want to understand our starting point. Depending on where that is, here are some skill areas and corresponding resources I typically recommend over time as a learning path, adjusting to their specific situation. Know Thyself Leaders must tackle their own issues before they can effectively help others. To help with this, I usually recommend: Dare to Lead by Brene Brown. It is a great resource on how to better understand yourself, work through why you want to be a leader, and how to lead from the heart. Visionary/Operator/Processor/Synergyst Testing by Predictable Success. I’ve found this helps leaders understand their own styles and how they may complement or conflict with their team. Talk therapy. Many people should take some time to work with a trained therapist to reach a healthy level of self-awareness and address ongoing issues. The more they understand themselves and how they react to situations, the more constructively they can handle even the toughest conflict and stress. Developing a healthy relationship with the team starts with having a healthy relationship with oneself. It will be a tough and worthwhile journey into leadership. Arm yourself with healthy coping methods and self-awareness for the bumps along the way. Get Ready for Hard Conversations You’ll be thankful for the work you’ve done on yourself as you navigate challenging conversations filled with emotion. You’ll need to be prepared to talk about where someone fits within the team and the organization, salaries, interpersonal relationships, overconsumption of shared resources, and more. In addition to helping my managers learn to understand themselves, here are some things I recommend: Crucial Conversations training from VitalSmarts. They’ll learn how to work through challenging discussions, de-escalate tension, and more skills that I continually practice. Manager Tools podcast. This gem is full of practical advice that is easy to apply. I learned how to do much more effective 1:1s, a key part of managing effectively. Having good, hard conversations takes practice and reflection. You will mess up, and you have to learn to take every opportunity to have those conversations, keep a cool head, and strive to do better in the next one. And the next one. And the next one… Prioritize and Align It is impossible to do everything well. Your team will work on whatever you ask them to do, so you need to be smart about what those things are. 
You also need to help them coordinate efforts to ensure you’re all moving in the same direction. To help, I recommend: Essentialism by Greg McKeown. Reading this helped me appreciate just how important it is to identify the most important work and give it adequate focus. Now I work with my team to do less but better. Product Roadmaps Relaunched by C. Todd Lombardo. This book provides guidance on how to define these oriented around outcomes vs. things that will be created. Measuring the work’s impact leaves the space to adapt direction as you learn more. Defining a vision and a strategy as a decision framework will help the team know where they’re all going, how their work is helping you get there, and what they shouldn’t be working on right now. We shoot to only take on what we are confident we can do in a typical workweek without feeling overwhelmed. It allows us to take the time to do our work well and connect with our customers. We also leave the space to adapt to new information as it comes in through experiments, feedback, and more. Learn to Let Go While you double-down on tough conversations and prioritization for your team, you’ll simultaneously need to learn to let go of doing everything yourself. I have seen many people struggle to understand and embrace the difference. To help with this mindset shift, I recommend: The Leadership Pipeline by Stephen Drotter. The book details the work involved in moving between levels, including transitioning into management and to managing managers. Each change requires new skill sets and temperaments, and a deep understanding of motives for making the move. Turn the Ship Around! by L. David Marquet. This book inspired me and has shaped my leadership style. In the book, the author describes how, even in the most top-down cultures (the Navy), it is possible to try new approaches that give people at every level of the organization autonomy and corresponding authority. Management is about setting direction through clear expected outcomes and providing them the support to help your team make them happen. It is not about telling people what to do. You are there to help them grow and to think for themselves. If you do it right, you’ll as much learn from them as they learn from you. Your contributions will change from when you were an individual contributor. To succeed, what you value will need to change, too. Be Honest with Yourself Last but not least, I encourage people to really reflect on why they want to manage. As I said before, it’s not about being the one that tells others what to do. It’s also not about the bigger office or access to perks. You have people’s careers in your hands, so you need to understand all that is involved and respect the magnitude of your impact. If you are not excited about helping others unlock their potential and dealing with the messiness of humans — it is not for you. And that’s perfectly okay. If you derive joy from supporting others in their accomplishments and giving them all of the credit, you thrive in tough conversations, you never want to stop learning, and you deeply appreciate people as people, go for it. Take your time, work through the transition, and ask for help along the way.
https://betsyboehmbland.medium.com/so-you-wanna-be-a-manager-b0436b8354d3
['Betsy Boehm Bland']
2019-02-09 12:05:09.195000+00:00
['Management And Leadership', 'Advice', 'Technology', 'Learning', 'Leadership']
923
Tezos - The Path Forward
We’re going to start off with that which gets the least amount of criticism — the average, vocal, Tezos community member; there is some strange tendency for the Tezos community to self-sabotage and engage in entirely unproductive actions. I’d reason that those who are most critical of Tezos are those within the Tezos community itself. This is especially the case on Reddit, which may also hint towards the need to reevaluate the current structure of r/Tezos. Presumably, people are critical of Tezos because they want Tezos to thrive — but at some point it becomes too much. Why is it that the Tezos subreddit is filled with posts and comments about how marketing needs to take advantage of influencers? Merely two months ago, the Tezos Foundation brought Huge, a highly experienced marketing firm that has worked with massive companies, on board as Agency of Record for the foundation. Only about a week before the Huge announcement, there was the announcement that a new Marketing and Communications organization named Blokhaus was coming to the Tezos ecosystem. Do we really think that neither of these organizations have considered using influencers? In merely a few months, we have already seen Tezos-related marketing appear with huge partnerships and a full video ad campaign. Maybe we can stop spamming the subreddit with “where are the influencers?” posts and comments. Maybe we can shift to actually being productive with our time. Oh, but this is far from a recent phenomenon. The Tezos community has a history of negativity regarding marketing — and the progress of marketing initiatives does nothing to halt it. All you need to do is search the keyword “marketing” on the Tezos subreddit to see the large amount of posts complaining about marketing. And it’s not just marketing. The Tezos community will jump at anything remotely imperfect. Let’s look at the curious case of reddit user u/No-Metal-6762 who joined Reddit in February 2021. Despite claiming to be an “ICO participant,” 80% of the content he’s posted on Reddit is just trashing on the Tezos Foundation and Arthur Breitman. Instead of taking responsibility as a community member and attempting to assist with spreading the word of Tezos, we get posts complaining that Tezos isn’t increasing in price during the bull run, spamming the subreddit with negative comments, and spamming posts asking “who can the community point to and blame?” (post deleted, but the comments are still available) across Tezos-related subreddits. My favorite post of all-time was definitely this one that practically begged Arthur to respond while also including this golden line addressed to him, “You are a more technical and intellectual personality, marketing is not your primary strength. However great leaders know their strengths as well as weaknesses and address them.” Imagine being a random, anonymous person on the internet and lecturing someone, who was instrumental in building a revolutionary blockchain, about his “strengths and weaknesses” while, most likely, contributing nothing of value to Tezos yourself. Sure, no person is perfect — but comments like these are insulting and unproductive. There’s also a somewhat common trend in all these threads — there’s a lot of negativity that is a simply a result of the community’s own ignorance. For example, look at this exchange under a post asking about Tezos marketing efforts: User 1: “this was a pretty good one” User 2: “But that was a decision by a 3rd party. 
Tezos didn’t pay them to do it to get more visibility” User 1: “it was two parts. the VC firm wants more Tezos action, and the TF is investing in the VC firm.” User 2: “Thank you for the insight!” Now, this was one of the more simple and civil exchanges but it captures the overall point —far more goes into marketing behind the scenes than the community cares to realize. Let’s compare this to the mentality of other communities. Algorand had the same concerns about marketing but the community handled it completely differently. Many posts have actually called out the community’s criticisms of marketing and they have pointed out that the marketing is fine. I’ve been critical of the Tezos Foundation’s marketing initiatives in the past. However, things have changed. Is it possible that we can let the professionals work for a bit as we shift our focus to things that are actually productive? There are plenty of better uses for our time. Look at things this way: There are currently over 51,000 members of the Tezos subreddit. If every member of the subreddit made a single comment a year on r/cryptocurrency, then, distributed evenly, that would be over 140 Tezos-related comments a day. If only half the subreddit members made 1 comment a year about Tezos on r/cryptocurrency then that would still be 70 comments a day. In order to have 1 medium article a day for an entire year we merely need 365 Tezos community members to write a single article over the course of 365 days. That’s less than 1% of the Tezos community writing a single article a year. Actually, people don’t even need to write medium articles — just post on r/cryptocurrency itself and hype up Tezos as much as possible. Look at this post written by someone who openly admits, “I’m new to all stuff about crypto” getting over 671 upvotes and over 426 comments. Is someone going to tell me that all these people concerned about marketing can’t find the time to make a few comments in r/cryptocurrency? Or make a single medium article, or a single r/cryptocurrency post, hyping up Tezos? Is someone going to tell me that a post instructing a professional marketing agency to use influencers (while they’re already, most likely, aware of such things) is more important than getting hundreds, if not thousands, of people aware of Tezos on social media with relatively low effort? Please, do the logical analysis on that and consider re-configuring your priorities here.
https://medium.com/@mutsuraboshi/tezos-the-path-forward-d262b8d07664
[]
2021-08-20 15:49:02.039000+00:00
['Blockchain', 'Tezos', 'Cryptocurrency', 'Community', 'Technology']
924
Difference Between Data Science & Business Analytics
Data Science and Business Analytics, often used interchangeably, are very different domains. A layperson would probably be the least bothered by this interchangeability, but professionals need to use these terms correctly, as the impact on the business is large and direct. In this article, we will elaborate on the difference between the two. Overview Data Science and Business Analytics are unique fields, with the biggest difference being the scope of the problems addressed. Simply put, the science of data that uses algorithms, statistics, and technology is known as Data Science. It provides actionable insights from a range of structured and unstructured data, addressing broader questions such as customer behavior. On the other hand, the statistical study of mostly structured business data is known as Business Analytics. It provides solutions to specific business problems and roadblocks. These two terms are used interchangeably in either of the above scenarios; for example, a business analytics problem could be wrongly framed as one to be solved with Data Science. The implications of carelessly using the term ‘Data Science’ in this context could be adverse, because the tools and techniques used in Business Analytics are different from those used in Data Science, and using the wrong tools to assess a data set will yield imperfect and undesirable results. Data Science is an umbrella term for all things dedicated to mining large data sets. An intersection of programming, statistics, and data analytics, Data Science is not limited to only statistical or algorithmic aspects. Business Analytics is the end-product of data science. It includes two broad categories: Statistical Analysis and Business Intelligence. Business Intelligence Another term often confused with Data Science is Business Intelligence. It is also an umbrella term, one that describes ideas and strategies to improve decision making by utilizing fact-based support systems. Modern Business Intelligence is much more than just business reporting. It is a mature framework that encompasses intuitive dashboards, mobile analytics, what-if planning, etc. It additionally incorporates enormous back-end machinery for maintaining control around reporting. Although it sounds similar to Data Science, it is not. The principal difference lies in the type of problems that they address. Business Intelligence deduces new, unknown values of previously known elements using a formula that is already available. On the other hand, Data Science works with unknown scenarios, without any formula or algorithm in hand, to solve data questions that nobody has ever answered in the past. Data Science problems are solved by exploring data, finding the best method, building a model around it, and finally operationalizing the model. Conclusion Business Intelligence is well established, with deep roots in the typical corporate landscape. Corporate professionals are familiar, comfortable, and confident with BI concepts and frameworks. As BI projects work on known unknowns, the projects can be planned well in advance and timelines can be followed efficiently. Also, there is minimal trial and error when a company has several successful BI projects in its kitty, as it will have developed good project expertise over the years. There is a massive career scope in the fields of Business Intelligence and Business Analytics. Professionals who are genuinely thinking of making a shift into BA and BI roles can consider upskilling with the right course.
Great Learning’s PG program in Business Analytics and Business Intelligence helps working professionals make a smooth and successful transition. The course offers the choice of online or classroom-based learning with Dual Certificate from the University of Texas at Austin, McCombs School of Business (world rank #2 in Analytics), and Great Lakes (India rank #1 in Analytics). It helps you with hands-on practical learning with case studies and projects, without the need of quitting your job. The course is also tailor-made keeping in mind the professionals from the non-IT background. With our career guidance and support, you can easily land your dream job in Business Intelligence and Business Analytics. Originally published at https://www.greatlearning.in on August 28, 2019.
https://medium.com/my-great-learning/difference-between-data-science-and-business-analytics-8d7dc1f4ff15
['Great Learning']
2019-09-17 10:50:16.792000+00:00
['Data Science', 'Technology', 'Data Analysis', 'Science', 'Business']
925
An Introduction to Swift
Collections Two of the primary collection types in Swift are Arrays and Dictionaries. Array Arrays store data of the same type in an ordered list. The following code creates an empty Array that stores Strings. var listOfStrings = [String]() If we want to let Swift infer the type of our Array, we can create it and populate it with some entries as follows: var someStrings = ["Wallace", "Gromit", "Cheddar"] The items inside an Array are accessed via indexes. An index represents an item's position in the array, and we always start counting from 0. var someStrings = ["Wallace", "Gromit", "Cheddar"] print(someStrings[1]) The above code outputs "Gromit". Dictionary Dictionaries store unordered key-value pairs, where all keys are of the same type and all values are of the same type. We can create an empty dictionary which maps from String keys to Int values as follows: var romanToDecimal = [String: Int]() And now to create some mappings we can do the following: romanToDecimal["X"] = 10 romanToDecimal["V"] = 5 A Note on Performance Dictionaries provide effectively constant look-up time. If we want to check whether a key is present in a dictionary, all we do is check whether there is a mapping from that key to a value. In big O notation, a notation used to describe how the runtime of a piece of code grows with its input, we would write this as O(1). If we want to check whether a value is present in an array, we need to cycle through the array and examine each element individually. In big O, this is written as O(n), which is to say that the runtime is bounded by the number of elements (n) in the array. Mutability Assigning a collection to a variable means that the collection can be changed later on. var someList = [1,2,3] someList = [1] If, instead, the collection is assigned to a constant, the collection and its contents cannot be changed.
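To tie the performance and mutability points together, here is a minimal, self-contained Swift sketch. The variable names (scores, fixedScores, numerals) are my own illustrations rather than examples from the article; the sketch contrasts a var collection with a let constant and shows the O(1) dictionary look-up next to the O(n) array scan described above.

```swift
// Collections assigned with `var` can be mutated later.
var scores = [10, 20, 30]
scores.append(40)            // scores is now [10, 20, 30, 40]

// Collections assigned with `let` are constants; the mutating call below
// would not compile, so it is left commented out.
let fixedScores = [1, 2, 3]
// fixedScores.append(4)     // error: cannot use mutating member on immutable value

// Dictionary look-ups take O(1) time on average: we jump straight to the key.
var romanToDecimal = ["X": 10, "V": 5, "I": 1]
if let value = romanToDecimal["V"] {
    print("V maps to \(value)")       // prints "V maps to 5"
}

// Checking whether an array contains a value is O(n): `contains`
// may have to examine every element before answering.
let numerals = ["X", "V", "I"]
print(numerals.contains("V"))         // prints "true"
```

Running this in a playground prints the dictionary hit immediately, while the array check has to walk the list, which is exactly the O(1)-versus-O(n) trade-off outlined in the performance note.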
https://medium.com/the-dev-caf%C3%A9/an-introduction-to-swift-9f75fe193eec
[]
2020-06-22 16:55:03.451000+00:00
['Programming', 'Coding', 'iOS', 'Technology', 'Swift']
926
Un-Gendering the City: How Can Designers Design An Infrastructure of Safety?
A look at gendered urban inequality "Belonging-ness and identity are as intrinsic to urbanity as autonomy and anonymity." (Miranne, Young) Cities offer the opportunity of anonymity - a sense of "social escape". This veneer of anonymity intrinsically contributes to the sense of security we feel when we move through an urban public realm. In many cities, there's an inequality in this experience of anonymity, particularly through the lens of gender. The absence of anonymity causes body language, clothing and movement patterns, among other things, to be co-opted to achieve a similar result. To achieve real change, we need both tactical and systemic approaches for creating impact. A society and environment that does not actively address the disconnect and fear of the public realm only serves to normalize and perpetuate it, deepening the cycle. Gendered urban inequality is a result of deep-rooted socio-cultural values and/or structural systems of inequality. Tactical measures can help address immediate concerns, but are only truly effective when working hand-in-hand with deeper drivers of change like awareness and education. Image Credit: Safetipin To design a safe experience through a city, designers need to address actual and perceived fear appropriately. To do this, let's recognize the difference between actual fear and perceived fear, and their dissimilar contributions to the perception of safety. Actual fear is experienced in response to an existing stimulus. Perceived fear is the fear experienced in the anticipation of a preconceived danger. In the context of human safety, perceived fear is the more omnipresent and arguably the more pervasive of the two. In other words, it's not just about the crime stats of a particular place, it's also about how safe it looks and feels and why — exactly what designers care about! "We shape our buildings, thereafter, they shape us" — Winston Churchill This is critical because perceptions of safety fundamentally impact how we choose to engage (or not engage) with the city. These perceptions can breed a deliberate distancing from the public realm — avoiding public places and public transportation, changing commute routes and adopting countless other safety practices. When an individual deliberately disengages from a setting out of fear due to their gender identity, the elements that are perceived as unsafe are only likely to strengthen further. The examples chosen illustrate distinct design approaches to address women's urban safety. There are three broad categories: (1) Urban design and planning, (2) Community Awareness and Art, and (3) Emergency mobile apps. (1) addresses both perceived and actual fear, while (2) and (3) address primarily perceived fear and actual fear, respectively. 01\ Safety Audit (Delhi, India) Image Credit: Women in Informal Employment: Globalizing and Organizing Approach Safety audits, first developed in the 1980s by the Metropolitan Action Committee on Public Violence against Women and Children (METRAC) in Toronto, Canada, are a tool through which women can begin to participate in the shaping of their environment. METRAC defines a safety audit as a methodology developed "to evaluate the environment from the standpoint of those who feel vulnerable and to make changes that reduce opportunities for assault." As part of the UN's Safe Cities Global Initiative, a women's safety audit was conducted in four participating cities across the globe including Delhi, India. 
The process recognizes that a woman's journey through the city is often greatly different from a man's, based on their regular activities. It involves leading a group of women through routes they regularly traverse while consciously evaluating their spatial qualities through the lens of safety. Some of the issues that are assessed and recorded carefully through the audit questionnaire are sight lines, lighting, and maintenance. Another method also involves having the women mark up a map with their perceptions of safety and the physical factors that contribute to it. The purpose of the audit is not only to create a feeling of control and trust in their physical environment for the women, but also to submit these findings to officials and policy-makers to implement a hard change. Learnings Changes in the physical environment are difficult to bring into effect, and the main issue arises when no changes are made — leading to feelings of discouragement among the women. An important learning from successful outcomes of safety audits is that changes in the built environment can be effected through public participation, provided the methodology of arriving at the proposed changes is systematic and can be sincerely validated. 02\ Plan India (Delhi, India) Street plays and workshops at the Plan India Safer Cities campaign. Source: YouTube Approach Plan India, a non-profit in Delhi, started an initiative called "Safer Cities" in Mongolpuri, located in the Greater Delhi area. The project involves one hundred houses or shops that were carefully chosen and marked with a neon-painted sign reading "Surakshit Shehar" (translating to "Safety House"). At any time of day or night, should a woman feel threatened or unsafe, she is welcome to knock on any of those doors. (Plan India) The houses were chosen either based on members of the organization who live there or the NGO's ability to personally vouch for the inhabitants of the development. Specific buildings on street corners or near parks where unsafe elements were observed were incorporated to help offset the embedded perception of fear. The inhabitants of the chosen buildings were also trained by the NGO to know how to correctly address any emergency they might face. Image credit: The Better India Impact The program's success can be measured by the perpetrators' awareness of this safety mechanism in place. The neon signs visible from the street act as a visual reminder, not only deterring unsafe elements but also instilling a sense of security in the women traversing those streets. This project shows how a network of community members and local authorities can be effectively directed to create safe environs. The significance of this project is that it does not involve changes in the built and physical environment in any way (apart from the painted signs) and works unseen to subvert the unsafe happenings in this area. 
According to the National Crime Records Bureau statistics for 2015, Bengaluru reported the third-highest number of attacks against women with intent to outrage their modesty (718) among 53 Indian cities. (Kalkodl) This, coupled with the fact that 58.1% of women polled responded that they did not report sexual violence (particularly comments, sexual jokes, whistling, leering and obscene gestures) and 40.1% did not report touching, brushing or groping (of the breast or buttocks), creates a telling context for projects such as the Blank Noise Project in Bengaluru. (Kalkodl) Approach The Blank Noise Project was started in Bengaluru and has subsequently spread to multiple cities across the country. It primarily addresses issues of street harassment through public art and engagement. The project does this by tackling the need for a change in public perceptions of women and safety in India's cities. (Pal) The issues partially stem from deep-rooted socio-cultural origins and are entrenched in modern-day society to the extent that topics regarding everyday harassment and mass molestation are normalized not just by the women themselves, who are vulnerable to these experiences, but even by political leaders (Safi). Instead of direct action to address these issues, typical responses range from questioning the moral character of the woman or women in question to suggesting that some victims invite attacks through their behavior. (Bhalla) Considering this context, projects like Blank Noise, working bottom-up, seek to create platforms for participatory actions and public debate to challenge existing norms. Furthermore, recognizing that the perceived fear of sexual offence has become ingrained in the city's female population, their initiatives also seek to empower women to break through their personal, limiting ideas of accessing the city. (Pal) Primary modes of public engagement involve urban art projects, marches, and pop-up events. In addition to these, a permanent network of "Action Heroes" and a database of testimonials also instill a strong identity within the participants of this movement. The issue that the Blank Noise Project appears to face primarily lies in the depth of its reach. At present, both its audience and participants belong to a limited demographic — upper-middle-class, educated women, both students and young professionals. Though this is not deliberate, and some of their organized events (e.g., Talk to Me) have actively engaged different classes and members of society, it would be useful to break socio-economic barriers in a more permanent way. "Talk to Me" Source: Blank Noise Project Learnings The Blank Noise Project shows us the need for a platform to break the cycle of "unconscious bias" that plays a big role in perceived fear. An unconscious bias or implicit bias is a social stereotype that is formed outside of one's conscious awareness. (University of California) One of its initiatives in Bangalore, "Talk to Me," placed the women feeling threatened literally across a table from those they felt threatened by. This initiative facilitated an enlightening dialogue in a healthy, mediated environment between the two groups. Allowing both parties the opportunity to interact and understand each other has led to better relations at present and has had a direct impact on the feeling of unsafety experienced at that location. 
Learn more at: http://www.blanknoise.org/home 04\ Safetipin Mobile App Safety Audit feature on the Safetipin app (Image credit: Safetipin) Approach With the penetration of mobile phones into India's urban market, the idea of emergency apps for cell phones holds potential in its ability to engage a broad audience regardless of age, gender, class or religion. Emergency mobile apps aim to provide immediate assistance to women in need. This assistance is multifold, typically directed at user-defined emergency contacts set up before an incident and/or at the authorities. (Ravelo) Safetipin is a mobile app developed by Jagori (the same NGO that conducted the safety audits in Delhi) to address issues of safety in the city when moving to a new place. Apart from GPS tracking that can help ensure the user's safety, users themselves can determine safety scores for different locations based on their inputs on factors such as lighting, openness, people density, security and feeling. The safety score is calculated and mapped, with the analysis visible for users to utilize, along with data such as the closest ATMs, pharmacies, etc. Users can also post information on the condition of infrastructure such as broken street lights or malfunctioning traffic lights. Safetipin is further unique because it has collaborated with Uber (which, after a recent rape case perpetrated by an Uber driver, installed external cameras on cars), which sends photographs of different parts of the city so that Safetipin can effectively incorporate this information for its users. Learnings Safetipin has many useful features; however, since many of them are crowd-sourced, it relies on the public being participatory. The organizers have circumvented this in a limited way by having volunteers kick-start the data collection, which has encouraged more users to start participating. Learn more at: https://safetipin.com In Conclusion… What makes your movement through the world feel safe to you? What doesn't? Put yourself in someone else's shoes — consider gender identity, socio-cultural background, linguistic abilities, economic stability — all of these issues are intersectional. How does this change your sense of security and safety? How would this impact the way you move through the world and the choices you make? Let's get designing! … REFERENCES Bhalla, N. (2014, February 6). Using rape as an excuse for moral policing in India. Thomson Reuters Foundation News. England. Retrieved from http://news.trust.org/item/20140206103830-xwq7d/ Facts and figures: Ending violence against women. (n.d.). Retrieved April 18, 2017, from http://www.unwomen.org/en/what-we-do/ending-violence-against-women/facts-and-figures Kalkodl, R. (2016, September 17). NCRB: Bengaluru 3rd among cities in molestation cases — Times of India. The Times of India. Retrieved from http://timesofindia.indiatimes.com/city/bengaluru/NCRB-Bengaluru-3rd-among-cities-in-molestation-cases/articleshow/54373165.cms Pal, S. (2016, September 6). Reclaiming the Streets: Bengaluru's Blank Noise Project Is Encouraging Women to Fight Fear. Retrieved from http://www.thebetterindia.com/67337/blank-noise-jasmeen-patheja-bengaluru-women-india/ Plan India. (2016). Because I am a Girl — The State of the Girl Child in India 2016. Retrieved from https://www.planindia.org/publication Ravelo, J. L. (2017, March 15). How mobile apps are making cities safer for women. 
Retrieved from https://www.devex.com/news/sponsored/how-mobile-apps-are-making-cities-safer-for-women-89768 University of California, San Francisco. (n.d.). Unconscious Bias | diversity.ucsf.edu. Retrieved April 19, 2017, from https://diversity.ucsf.edu/resources/unconscious-bias
https://uxplanet.org/un-gendering-the-city-how-can-designers-design-an-infrastructure-of-safety-b54f5955b070
[]
2021-01-08 19:59:59.095000+00:00
['Gender Equality', 'UX Design', 'Cities', 'Urban Planning', 'Technology']
927
The Age of the Algorithm
(Published initially on August 26th, 2016) "Technology" is a funny word, extremely malleable: we tend to recognize as technology only the most recent artifacts of human progress. In a more than purely metaphorical sense, technology is what appears in the world after we graduate. We call the likes of Google, Facebook and Amazon technology-based firms because the last great economic and social revolution hinged on the internet, but almost two centuries ago trains were all the novelty, and the technology firms of that age made huge fortunes covering the rolling plains of the Far West with a network of railroads, amid Indian fights and bandit robberies immortalized in John Wayne's movies. Boilers over wheels replacing horses and carriages. Startups are technological because technology allows them to compete with the establishment. What technology provides to a startup is, mainly, scalability. This is relevant not only because startups routinely lack the deep pockets of a fully grown business and must achieve a lot with very limited resources, but also because frequently a startup's winning strategy relies on offering a superior product at an inferior, yet marginally profitable, price and quickly obtaining the economies of scale that effectively create a de-facto monopoly. There are many ways to compete, but really few to win. The best way I ever heard to summarize it is in the candid words of Jack Welch, CEO of General Electric and professor at MIT: "do it first, do it cheaper, do it better or go home". We have a classic example in Google: Sergey Brin and Larry Page created the company around PageRank, the algorithm from their thesis to automatically classify a website's relevance and thus facilitate internet search. Infinitely more scalable than the technology used by the incumbents of the time: rooms full of clerks constantly browsing and manually classifying the information. Interestingly, in 1998 Google offered to sell their technology for 1 million dollars because the founders preferred to continue their studies at Stanford, but both AltaVista and Yahoo declined the outrageous offer. Yahoo had yet another opportunity to acquire Google in 2002, but the agreement fell apart after weeks of hard negotiations because Terry Semel, then Yahoo's CEO, faltered at the clearly exaggerated 5 billion dollar price tag. As I write these lines at the beginning of September, 2016, Google and its parent organization Alphabet have a market capitalization in excess of 500 billion dollars. This is a humongous level of value creation. Meanwhile, in July of this same year it was announced that Yahoo had reached an agreement to sell its internet properties to Verizon for 4.83 billion dollars. It is worth remembering that in 2008 Microsoft bid 44.6 billion dollars for all of Yahoo's businesses. That, dear readers, is value destruction. An algorithm defeated an entire industry, and this story has often been repeated since, because distributed computation is at the spearhead of technology. Airbnb, Uber and Taskrabbit are today's reference companies worldwide, followed by a myriad of startups pursuing more specific market opportunities in the sharing economy, crowdsourcing and crowdfunding. Through digital marketplaces that automate the exchange of services among individuals, technology again provides the scalability that makes the disruption of an established industry possible. So, what does the future hold for us? 
We already know that predictions are only reliable when performed backwards, but it is likely that the next great technological revolution will come from the world of artificial intelligence (AI), the set of methodologies that strive to replicate human cognitive processes in machines. Here we have an interesting blind spot: as we mentioned above, technology is all that's new in the world, but AI is veiled with the magical halo with which we tend to dress everything that surrounds matters of the human mind. Rodney Brooks, professor of AI at MIT, refers to it as the AI effect: "When we finally manage to understand a piece we say 'Oh! That is just computation' and we leave as AI only the unsolved problems". We should be aware that elements of AI have for years been "infiltrating" our daily experience, often in rather subtle ways: the simple ability to automatically tag friends in pictures, the kind interactivity of a virtual assistant, the cunning behavior of an enemy inside one of our favorite computer games. Now both technical advances in fields like deep learning and the availability of massive amounts of data with varying levels of structure (what is usually known as big data) will allow us to perform automated analysis in areas of human activity that just a few years earlier required a high level of domain-specific knowledge and were thus confined to the hardly scalable niches of the expert and the consulting boutique. An example that is close to home: smart algorithms enable us at Ágora EAFI to automate the tedious task of analyzing and classifying fundamental company information not for a single firm, but for thousands of them, unearthing relevant patterns without the need to hire rooms full of technical analysts, with the associated costs and headaches. Another example that I find really interesting is Descartes Labs, a startup based in the US that employs machine learning algorithms applied to pictures taken by low-orbit satellites in order to more precisely predict crop yields. While the method employed by the Department of Agriculture (USDA), which consists of asking farmers around the country, produces one estimate each month, Descartes Labs updates its own every two days. Imagine using that privileged information to speculate in the market with futures. Big deal. And beyond? If the advances in biotechnology are up to our expectations, then maybe the algorithm will become flesh. But please stop me here before my philosophical knack runs wild and I bore you even more. Cheers and talk again soon!
https://medium.com/algonaut/the-age-of-the-algorithm-60f9b3125991
['Isaac De La Peña']
2019-06-24 10:02:29.776000+00:00
['Startups', 'Artificial Intelligence', 'Technology', 'Innovation', 'AI']
928
Soccer Laduma App community show their support while being rewarded for the UGC.
Soccer Laduma App community show their support while being rewarded for the UGC. The year has kicked off to a great start with Soccer Laduma App users taking full advantage of the rewards received for their UGC. The Soccer Laduma App, which launched in May 2019, is built on the JET8 social commerce technology, which places users back in the center of the value chain. Within the past few months, the Soccer Laduma App has seen a phenomenal spike in new users. Individuals who are actively posting and engaging on the App have experienced the full benefits of JET8 Technology through social commerce. Currently, active users on the App are using their social currency, JETPoints, predominantly to redeem data and airtime. The process itself is fast and easy, with the voucher number being received instantly after the process has been completed in the JET8 Wallet. Participants of the exclusive Soccer Laduma Fan Club have taken to the App and are posting regularly as they attend exciting soccer games in South Africa. Users on the App are showing their support by engaging with these posts and posting their own exciting soccer-focused content, showing us that true fans of Soccer Laduma are being rewarded. With blockchain and social commerce being a hot topic all over the world, it is exciting to know that one of South Africa's leading publications has already adapted to the way of the future. Continue to show your support by downloading the App and engaging to experience the full user-centric JET8 Technology cycle. Get the JET8 Wallet App If you don't have the app installed yet, get it now at https://wallet.jet8.io or discover new DENapps at https://apps.jet8.io For media inquiries, please contact: [email protected], PR and Social Media Manager
https://medium.com/jet8-token/soccer-laduma-app-community-show-their-support-while-being-rewarded-for-the-ugc-811b757d5ffc
[]
2020-01-23 18:00:51.155000+00:00
['Social Commerce', 'Weekly Update', 'Innovation', 'Blockchain Technology', 'Press Release']
929
April Fools’ 2019: Perception-driven data visualization
April Fools’ 2019: Perception-driven data visualization Exploring OKCupid data with the most powerful psychological technique for accelerating analytics This article was a prank for April Fools’ Day 2019. Now that the festivities are over, scroll to the end of the article for the Real Lessons section for a minute of genuine learning. Evolution endowed humans with a few extraordinary abilities, from walking upright to operating heavy machinery to hyperefficient online mate selection. Humans have evolved the ability to process faces quickly, and you can use perception-driven technique to accelerate your analytics. One of the most impressive is our ability to perceive tiny changes in facial structure and expression, so data scientists have started exploiting our innate superpowers for faster and more powerful data analytics. Evolution-driven data analysis Get ready to be blown away by an incredible new analytics technique! Chernoff Faces are remarkable for the elegance and clarity with which they convey information by taking advantage of what humans are best at: facial recognition. The core idea behind Chernoff faces is that every facial feature will map to an attribute of the data. Bigger ears will mean something, as will smiling, eye size, nose shape, and the rest. I hope you’re excited to see it in action! Let’s walk through a real-life mate selection example with OKCupid data. Data processing I started by downloading a dataset of nearly 60K leaked OKCupid profiles, available here for you to follow along. Real-world data are usually messy and require quite a lot of preprocessing before they’re useful to your data science objectives, and that’s certainly true of these. For example, they come with reams of earnest and 100% reliable self-intro essays, so I did a bit of quick filtering to boil my dataset down to something relevant to me. I used R and the function I found most useful was grepl(). First, since I live in NYC, I filtered out all but the 17 profiles based near me. Next, I cleaned the data to show me the characteristics I’m most fussy about. For example, I’m an Aquarius and getting along astrologically is obviously important, as is a love of cats and a willingness to have soulful conversations in C++. After the first preprocessing steps, here’s what my dataset looks like: The next step is to convert the strings into numbers so that the Chernoff face code will run properly. This is what I’ll be submitting into the faces() function from R’s aplpack package: Next step, the magic! Faces revealed Now that our dataset is ready, let’s run our Chernoff faces visualization! Taa-daa! Below is a handy guide on how to read it. Isn’t it amazingly elegant and so quick to see exactly what is going on? For example, the largest faces are the tallest and oldest people, while the smilers can sing me sweet C++ sonnets. It’s so easy to see all that in a heartbeat. The human brain is incredible! Data privacy issues Unfortunately, by cognitively machine deep learning all these faces, we are violating the privacy of OKCupid users. If you look carefully and remember the visualizations, you might be able to pick them out of a crowd. Watch out for that! Make sure you re-anonymize your results by rerunning the code on an unrelated dataset before presenting these powerful images to your boss. Dates and dating Chernoff faces?! You really should check publication dates, especially when they’re at the very beginning of April. 
I hope you started getting suspicious when this diehard statistician mentioned astrology and were sure by the time I got to the drivel about de-anonymization. Much love from me and whichever prankster forwarded this to you. ❤ Real lessons I've always been amused by Chernoff faces (and eager for an excuse to share some of my favorite analytics trivia with you), though I've never actually seen them making themselves useful in the wild. Even though the article was intended for a laugh, there are a few real lessons to take away: Expect to spend time cleaning data. While the final visualization took only a couple of keystrokes to achieve, the bulk of my effort was preparing the dataset to use, and you should expect this in your own data science adventures too. Data visualization is more than just histograms. There's a lot of room for creativity when it comes to how you can present your data, though not everything will be implemented in a package that's easy for beginners to use. While you can get Chernoff faces through R with just the single function faces(data), the sky is the limit if you're feeling creative and willing to put the graphics effort in. You might need something like C++ if you're after the deepest self-expression. What's relevant to me might not be relevant to you. I might care about cat-love, you might care about something else. An analysis is only useful for its intended purpose, so be careful if you're inheriting a dataset or report made by someone else. It might be useless to you, or worse, misleading. There's no right way to present data, but one way to think about viz quality is speed-to-understanding. The faces just weren't efficient at getting the information into your brain — you probably had to go and consult the table to figure out what you're looking at. That's something you want to avoid when you're doing analytics for realsies. Chernoff faces sounded brilliant when they were invented, the same way that "cognitive" this-and-that sounds brilliant today. Not everything that tickles the poet in you is a good idea… and stay extra vigilant for leaps of logic when the argument appeals to evolution and the human brain. Don't forget to test mathemagical things before you deploy them in your business. If you want to have a go at creating these faces yourself, here's a tutorial. 
If you prefer to read one of my straight-faced articles about data visualization instead, try this one.
https://towardsdatascience.com/perception-driven-data-visualization-e1d0f13908d5
['Cassie Kozyrkov']
2019-04-02 13:29:17.253000+00:00
['Analytics', 'Data Science', 'Technology', 'Visualization', 'Artificial Intelligence']
930
Can’t Sleep? This Tech Could Put You in Sync with the Sun
Can’t Sleep? This Tech Could Put You in Sync with the Sun A new device claims to go beyond sleep tracking to reset your circadian rhythm. I’m dreaming too much. That’s what Fares Siddiqui, cofounder of the company Circadia, tells me after its sleep tracker spends several nights perched at my bedside. When I first saw the long stretches of REM sleep — the stage of sleep when dreaming happens — in my data, I romanticized the results. I dream big, I thought. But Siddiqui says the pattern is a result of being either sleep-deprived or anxious. Oh. Circadia is a startup focused on circadian rhythms, and the promise that if you can understand and control your daily patterns, you’ll sleep better: “a sleep lab on your bedside table,” pledges its marketing material. Most sleep trackers — devices on your wrist, on your mattress, or at your bedside — track your tossing and turning along with functions like your heart rate to tell you how much and how well you’ve slept. But typically, they don’t tell you what to do about it. Siddiqui’s company, funded by healthy Kickstarter and Indiegogo campaigns, is developing a connected tracker, lamp, and app. It aims to set itself apart from the current wave of sleep trackers by offering both information on your own personal rhythm, and customized advice. “We want to tell you what time it is inside of your body,” Siddiqui says. He became passionate about the topic after dealing with his own insomnia and learning about NASA’s light experiments to help astronauts’ circadian rhythms. Your circadian rhythm, your body’s natural 24-hour cycle, affects everything from sleep and jet lag to hormones and how well your drugs work. But different people’s internal clocks may run a little ahead or behind — maybe it’s midnight in your body when the clock says it’s only 10 pm. For me, I’m hoping some circadian insight can help me feel more refreshed in the mornings. Other circadian-curious people might need to adjust to jet lag or shift work, or identify bad bedtime habits that are keeping them awake. To learn about my own circadian biology, I let a premarket version of Circadia’s tracking device watch me sleep, I breakfasted by the glow of its lamp, and I gave personal details to its sleep-coaching app. I got an intriguing glimpse into the functioning of my body. But when it came to understanding the significance of my personal patterns, I was mostly left in the dark. Surfing the wavelengths In December, if all goes as planned, you’ll be able to buy Circadia’s $129 sleep tracker, which will be integrated with its lamp and app. For now, a rudimentary version of the app is free, and the lamp sells as a standalone for $79 — but it is a very handsome lamp, a sleek cylinder of blue light that morphs to red when you flip it over. Those wavelengths are intended to reinforce my 24-hour rhythm, helping me sleep at night and be more alert during the day. When I lie down in bed, I’m supposed to leave that dim red light on for half an hour (even though my eyes are closed) to help myself fall asleep. The instructions also say 30 minutes of blue light in the morning will alleviate grogginess, so I eat a few breakfasts with the lamp lighting up my raisin bran. Groggy mornings? Flip the lamp for blue light, which may help you wake up. Courtesy of Circadia I can’t tell if it makes me feel more awake, but according to Sabra Abbott, a neurologist at Northwestern University Feinberg School of Medicine, the ability of blue light to promote wakefulness and adjust our body clocks is well established. 
Blue wavelengths in sunlight naturally help our brains calibrate our clocks by preventing production of melatonin, a hormone that makes us sleepy. That’s why experts tell us not to stare at our phones, which emit blue light, at night in bed. Abbott uses blue-wavelength light therapy and melatonin to treat patients with circadian disorders, who might naturally fall asleep very late or wake long before dawn. Light therapy is powerful enough that those with circadian disorders should be cautious with its timing, she adds. Someone whose clock is so shifted that they only fall asleep near dawn, for example, could make things worse by using blue light in the morning. “We want to tell you what time it is inside of your body,” Siddiqui says. That said, she doesn’t know of any reason to turn on a red lamp while you’re falling asleep. “It’s not so much the presence of red light that’s helpful, but the absence of blue light,” she says. Siddiqui says red light prevents melatonin suppression, which is true — but it’s no better than being in the dark. If you really wanted to take advantage of red light (and didn’t mind the creepiness factor), you could do all your evening activities by red light only. But Circadia’s red light, by design, is too dim for that. I got yellow-zoned The second part of Circadia’s setup is a sleep tracker, an elegantly designed hand-sized disc that snaps magnetically onto a stand. Siddiqui says it scans my body with radar looking for tiny movements to infer my heart rate and breathing. From that, it figures out which parts of the night I spent in wakefulness, light sleep, deep sleep, or REM sleep. In the company’s own comparison testing, Circadia outperformed wearable devices like the Fitbit. I follow directions to set it roughly an arm’s length away from my bed and aim it at my torso. After some fussing on my phone to connect the tracker and app over wifi (so much for avoiding blue light), I hit “start” and lie down. A radar-based device tracks heart rate and breathing. Courtesy of Circadia The first night, I feel self-conscious with the tracker staring at me. In the morning the app — a beta version that isn’t yet publicly available — says it took me 43 minutes to fall asleep. Even after I get used to the tracker, the app seems to chastise me every morning, displaying a circle about half-filled with yellow and a middling “sleep index” score. The app also shows a timeline of my night that seems generally correct: It takes me a while to fall asleep. I sleep deeply at first, then shift into REM sleep in the early morning. As my husband gets ready for work, I alternate between dreaming and dozing. Siddiqui says that later versions of the app will tell users how their circadian clocks align with the outside world. It will deliver personal recommendations for using the lamp and for changes to habits and sleep environments, so that people can recalibrate their internal clocks, sleep better, or combat jet lag. In the spring, users will also be able to sign up for advice from a human sleep coach or therapist. For now, my only feedback comes from Siddiqui, who notices me waking up often. He also tells me that while an average person spends about a quarter of the night in REM sleep, for me it topped 40 percent on some nights, and 57 percent on one especially dreamy night. My body may be trying to catch up on missed rest by sacrificing deep and light sleep for extra REM. But Abbott doesn’t think I should read too much into my results. 
Sleep tracking is an imperfect way to tell the time on someone’s internal clock; the best way is to measure melatonin production. In its most recent lab tests, Circadia was about 67 percent accurate at telling what sleep stage a person was in. So far, those lab tests have included only a small number of young, healthy males — not anyone with an actual sleep disorder. Besides, people spend different amounts of time in certain sleep stages for many reasons, including normal variation and drug side effects — antidepressants reduce REM, for example. Circadia claims 1 in 3 people have rhythms that are out of sync, but Abbott says this is hard to know. Everyone falls somewhere on a spectrum from early bird to night owl, she says, which isn’t a disorder unless it interferes with life. But being told by an app that their sleep is abnormal might make people needlessly anxious.
https://medium.com/neodotlife/cant-sleep-this-tech-could-put-you-in-sync-with-the-sun-d8125c4a0a3c
['Elizabeth Preston']
2020-07-13 20:35:40.711000+00:00
['Wellness', 'Self Improvement', 'Technology', 'Sleep', 'Health']
931
Six Wrong Predictions Reported By the New York Times
The New York Times is one of the most prominent American daily newspapers, with millions of readers in the US and across the globe. During his tenure as the President of the United States, Donald Trump attacked the New York Times and other media outlets, consistently labeling them "fake news." In contrast to his remarks, the New York Times has won 130 Pulitzer Prizes — more than any other newspaper. Established in 1851, it has been an influential newspaper in the US and around the world for decades. It's known as a national "newspaper of record," according to the Encyclopedia Britannica. While acknowledging the New York Times' reputation and credibility, I shed light on six predictions reported by this newspaper that have since proved untrue. These predictions were made about Airplanes & Flying, Laptops, Apple & iPhone, Twitter, Television, and Automobiles. 1. On Flying: We won't be able to fly for millions of years On October 09, 1903, the New York Times published a piece about the future of flying titled "THE FLYING MACHINES THAT DO NOT FLY," which stated: "… it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years — provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in inorganic materials." On December 17, 1903, the Wright brothers flew their first airplane. A decade ago, how many of us thought that flying cars might never be more than a myth? Now, we not only have flying cars but also flying Gravity Jets, or let's say flying humans. What's next? Photo by Natali Quijano on Unsplash 2. On Laptop computers: No one would be interested in a Laptop A New York Times article from 1985 suggested that few people would be interested in carrying around a laptop computer. The article, titled "THE EXECUTIVE COMPUTER," states: "On the whole, people don't want to lug a computer with them to the beach or on a train to while away hours they would rather spend reading the sports or business section of the newspaper… the real future of the laptop computer will remain in the specialized niche markets. Because no matter how inexpensive the machines become, and no matter how sophisticated their software, I still can't imagine the average user taking one along when going fishing." As of February 2019, over 70% of US households had either a laptop or a desktop computer at home. Laptop computers continue to shrink in size but become more powerful in terms of capacity. "The first floppy disk, introduced in 1971, had a capacity of 79.7 kB" Now, even a Notepad file is larger than 80 kilobytes. Are kilobytes still relevant? Don't you take your laptop along when going fishing? Forget about notebooks; our mobile phones, tablets, and other similar devices are more accessible now — they're more portable, personal, and closer to our hearts and EYES. Photo by Campaign Creators on Unsplash 3. On Apple and iPhones: They will never succeed | They will never have a phone either Apple was established in 1976. Two decades later, the New York Times wrote that Apple would fail, quoting a Forrester Research analyst: "Whether they stand alone or are acquired, Apple as we know it is cooked. It's so classic. It's so sad." A decade later, in 2006, another article reported that Apple might never produce a cell phone: "Everyone's always asking me when Apple will come out with a cell phone. 
My answer is, 'Probably never.'" David Pogue, The New York Times. Apple released its iPhone in 2007. Since then, 2.2 billion iPhones have been sold. And Apple? It's the most valuable brand in the world as of 2020. Photo by Neil Soni on Unsplash 4. On Twitter: Only the illiterate might use it A New York Times article discussing the emergence of Twitter also referenced Bruce Sterling's earlier remarks on the platform. He is a New York Times best-selling science-fiction writer and journalist. In 2006, he held the view that Twitter would not be prominent among intellectuals and that only the illiterate might use it: "Using Twitter for literate communication is about as likely as firing up a CB radio and hearing some guy recite 'The Iliad.'" — Bruce Sterling, The New York Times. President Donald Trump posted over 17,000 tweets in just the first two-and-a-half years of his presidency — and plenty of highly literate people retweeted them. As of 2018, Twitter had over "321 million monthly active users." Many politicians, celebrities, intellectuals, and other highbrows use it too. Contrary to the prediction made in 2007, if anything it's the illiterate who cannot use Twitter, as it's hard to say so much in so few characters. Photo by MORAN on Unsplash 5. On Television: It will not be a competitor to broadcasting, and people will not have time to watch it In 1939, a New York Times article suggested that people would not have time to watch television and that, for this reason, it could not compete with other forms of media such as newspapers and radio. The article states: "The problem with television is that the people must sit and keep their eyes glued on a screen; the average American family hasn't time for it." Based on a 2019 estimate, "307.3 million people ages 2 and older live in US TV households." Now, you can watch TV even in the bathroom. You don't need to carry a TV set, just your phone or tablet. But there is one thing: Newspapers are not gone. Over 69% of the US population still reads newspapers. According to Forbes, print remains the most common medium, with 81 percent reading this format. According to studies, 2.5 billion people read print newspapers daily. What happened to the TV industry? As of 2015, "An estimated 1.57 billion households around the world owned at least one TV set." Since a household typically includes more than one person, billions of people watch TV every day — more than those who read newspapers. Photo by Dave Weatherall on Unsplash 6. On High-Speed Automobiles: We won't be able to drive over 80 miles per hour Reporting on the dangers of high-speed driving, a New York Times article suggested that our brains cannot guide a car at any speed over 80 miles per hour. It reported a debate between two experts that took place in Paris in 1904. The article says: "It remains to be proved how fast the brain is capable of traveling […] If it cannot acquire an eight-mile per hour speed, then an auto running at the rate of 80 miles per hour is running without the guidance of the brain, and the many disastrous results are not to be marveled at." In 1894, the Benz Velo had a top speed of 12 mph (20 km/h). In 1904, it was claimed that no speed over 80 mph was plausible. In 2017, the Koenigsegg Agera RS reached 277.87 mph (447.19 km/h), a record for a production car. In Germany, there are no speed limits on most highways.
https://medium.com/swlh/six-wrong-predictions-reported-by-the-new-york-times-252c0f4b8e32
['Massùod Hemmat']
2020-12-15 19:03:49.558000+00:00
['The New York Times', 'Predictions', 'Journalism', 'Technology', 'Politics']
932
Why are Stablecoins Important in the Lending Business?
Over the last few days, stablecoins have become a center of attention in the crypto world. On Wednesday, October 24th, Tether burned 500 million USDT stablecoin tokens. The past few weeks have seen massive Tether transactions, particularly after USDT lost parity with the U.S. dollar amid questions about Tether's access to banking services. Besides Tether, other stablecoins have also experienced unusual behavior. These events led thousands of people around the world to start questioning the need to invest in stablecoins and their reliability as intermediaries. Now, more than ever, as cryptocurrencies become more stable, investors are starting to wonder whether stablecoins are still imperative. Strong changes in the market cap can either boost the market with unique investment opportunities or create a temporary recession by driving away investment and devaluing coin and token prices. Therefore, in an unpredictable environment like the crypto world, stablecoins are quite important, as their role is to bring stability to an unpredictable and highly volatile market, creating an investment space for stable deposits and predictable loans and transactions. Any institution offering lending, wealth management or intermediary services — such as LOTS — should consider the advantages that stablecoins bring to its fintech environment, while being aware of the qualities that are therefore needed. Given that the value of stablecoins is based on the trust given by their community, they can be pegged to fiat currencies or exchange-traded commodities. A stablecoin can also be backed by another cryptocurrency and linked to a decentralized autonomous organization which controls issuance and pricing, thereby offering real and stable value to its token. Stablecoins play an intermediary role for non-volatile transactions while keeping their nominal value parallel to their real value. Lending institutions for digital assets could consider using stablecoins as an alternative token to provide their clients with more stability in their financial transactions. A loan denominated in a highly volatile currency might seriously jeopardize both lenders and borrowers. For example, if a 1 BTC loan had been taken out exactly one year ago ($6,000 approx.) and paid back last December ($19,000 approx.), the borrower would have paid back over three times the real value he borrowed only two months later (in the hypothetical case that it was a two-month loan). Now, let's say that same 1 BTC loan was given last December and paid back today ($6,300); the lender would have lost over $10k. Stablecoins offer the possibility of fair transactions, as a token's nominal value carries a real value built on the community's trust, which opens up possibilities for loans and deposits as well as more accurate forecasts and investments. It is important for institutions around the world to rely on the opportunities that stablecoins provide. Stablecoins can offer several of the advantages that fiat currency offers in the traditional financial world, and they can boost the use of and trust in cryptocurrencies around the world. Brian Armstrong Coinbase Daniel Jeffries Noam Levenson Ben Yu Taylor Pearson
https://medium.com/lots-epcot/why-are-stablecoins-important-3476243e01c1
['Eric Koechlin']
2018-11-05 00:52:27.987000+00:00
['Blockchain', 'Ethereum', 'Technology', 'Bitcoin', 'Cryptocurrency']
933
The Hardest Coding Interview Questions Ever
The Hardest Coding Interview Questions Ever I looked at over 1,000 coding interview questions and compiled the hardest ones, as asked by tech giants, below. Question 01 There exists a staircase with N steps, and you can climb up either 1 or 2 steps at a time. Given N, write a function that returns the number of unique ways you can climb the staircase. The order of the steps matters. For example, if N is 4, then there are 5 unique ways: [1, 1, 1, 1], [2, 1, 1], [1, 2, 1], [1, 1, 2], [2, 2]. What if, instead of being able to climb 1 or 2 steps at a time, you could climb any number from a set of positive integers X? For example, if X = {1, 3, 5}, you could climb 1, 3, or 5 steps at a time. Question 02 Given an integer k and a string s, find the length of the longest substring that contains at most k distinct characters. For example, given s = "abcba" and k = 2, the longest substring with k distinct characters is "bcb". Question 03 Given a list of integers, write a function that returns the largest sum of non-adjacent numbers. Numbers can be 0 or negative. For example, [2, 4, 6, 2, 5] should return 13, since we pick 2, 6, and 5. [5, 1, 1, 5] should return 10, since we pick 5 and 5. Follow-up: Can you do this in O(N) time and constant space? Question 04 Given an array of integers, return a new array such that each element at index i of the new array is the product of all the numbers in the original array except the one at i. For example, if our input was [1, 2, 3, 4, 5], the expected output would be [120, 60, 40, 30, 24]. If our input was [3, 2, 1], the expected output would be [2, 3, 6]. Follow-up: what if you can't use division? Question 05 Given an array of integers, find the first missing positive integer in linear time and constant space. In other words, find the lowest positive integer that does not exist in the array. The array can contain duplicates and negative numbers as well. For example, the input [3, 4, -1, 1] should give 2. The input [1, 2, 0] should give 3. You can modify the input array in-place. Question 06 Given an array of numbers, find the length of the longest increasing subsequence in the array. The subsequence does not necessarily have to be contiguous. For example, given the array [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15], the longest increasing subsequence has length 6: it is 0, 2, 6, 9, 11, 15. Question 07 We're given a hashmap associating each courseId key with a list of courseIds values, which represents that the prerequisites of courseId are courseIds. Return a sorted ordering of courses such that we can finish all courses. Return null if there is no such ordering. For example, given {'CSC300': ['CSC100', 'CSC200'], 'CSC200': ['CSC100'], 'CSC100': []}, you should return ['CSC100', 'CSC200', 'CSC300']. Question 08 A quack is a data structure combining properties of both stacks and queues. It can be viewed as a list of elements written left to right such that three operations are possible: push(x): add a new item x to the left end of the list; pop(): remove and return the item on the left end of the list; pull(): remove the item on the right end of the list. Implement a quack using three stacks and O(1) additional memory, so that the amortized time for any push, pop, or pull operation is O(1). Question 09 Given an array of numbers of length N, find both the minimum and maximum using less than 2 * (N - 2) comparisons. 
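Question 01 above is a classic dynamic-programming exercise. As a hedged illustration only (this is my own sketch in Swift, not a model answer expected by any particular company, and the function name climbWays is mine), here is how the generalized version with a step set X might look: the number of ordered ways to reach step i is the sum of the ways to reach i minus each allowed step size.

```swift
// Question 01: number of ordered ways to climb n steps,
// where each move must be one of the sizes in `steps`.
func climbWays(_ n: Int, steps: [Int]) -> Int {
    if n < 0 { return 0 }
    var ways = [Int](repeating: 0, count: n + 1)
    ways[0] = 1                          // the empty climb: one way to cover 0 steps
    if n >= 1 {
        for i in 1...n {
            for s in steps where s <= i {
                ways[i] += ways[i - s]   // the last move taken was of size s
            }
        }
    }
    return ways[n]
}

print(climbWays(4, steps: [1, 2]))     // 5, matching the example in Question 01
print(climbWays(5, steps: [1, 3, 5]))  // 5: 1+1+1+1+1, 1+1+3, 1+3+1, 3+1+1, 5
```

With steps = [1, 2] the same recurrence reduces to the Fibonacci-like count described in the basic version of the question.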
Question 10 Given a word W and a string S, find all starting indices in S which are anagrams of W. For example, given that W is "ab" and S is "abxaba", return 0, 3, and 4. Question 11 Given an array of integers and a number k, where 1 <= k <= length of the array, compute the maximum values of each subarray of length k. For example, given array = [10, 5, 2, 7, 8, 7] and k = 3, we should get [10, 7, 8, 8], since 10 = max(10, 5, 2), 7 = max(5, 2, 7), 8 = max(2, 7, 8), and 8 = max(7, 8, 7). Do this in O(n) time and O(k) space. You can modify the input array in-place and you do not need to store the results. You can simply print them out as you compute them. Question 12 Find an efficient algorithm to find the smallest distance (measured in number of words) between any two given words in a string. For example, given the words "hello" and "world" and a text content of "dog cat hello cat dog dog hello cat world", return 1 because there's only one word "cat" in between the two words. Question 13 Given a linked list, uniformly shuffle the nodes. What if we want to prioritize space over time? Question 14 Typically, an implementation of in-order traversal of a binary tree has O(h) space complexity, where h is the height of the tree. Write a program to compute the in-order traversal of a binary tree using O(1) space. Question 15 Given a string, find the longest palindromic contiguous substring. If there is more than one with the maximum length, return any one. For example, the longest palindromic substring of "aabcdcb" is "bcdcb". The longest palindromic substring of "bananas" is "anana". Question 16 Implement an LFU (Least Frequently Used) cache. It should be able to be initialized with a cache size n, and contain the following methods: set(key, value): sets key to value. If there are already n items in the cache and we are adding a new item, then it should also remove the least frequently used item. If there is a tie, then the least recently used key should be removed. get(key): gets the value at key. If no such key exists, return null. Each operation should run in O(1) time. Question 17 Given a string consisting of parentheses, single digits, and positive and negative signs, convert the string into a mathematical expression to obtain the answer. Don't use eval or a similar built-in parser. For example, given '-1 + (2 + 3)', you should return 4. Question 18 Connect 4 is a game where opponents take turns dropping red or black discs into a 7 x 6 vertically suspended grid. The game ends either when one player creates a line of four consecutive discs of their color (horizontally, vertically, or diagonally), or when there are no more spots left in the grid. Design and implement Connect 4. Question 19 There are N couples sitting in a row of length 2 * N. They are currently ordered randomly, but would like to rearrange themselves so that each couple's partners can sit side by side. What is the minimum number of swaps necessary for this to happen? Question 20 Recall that the minimum spanning tree is the subset of edges of a graph that connects all its vertices with the smallest possible total edge weight. Given an undirected graph with weighted edges, compute the maximum weight spanning tree. Question 21 Sudoku is a puzzle where you're given a partially-filled 9 by 9 grid with digits. 
The objective is to fill the grid with the constraint that every row, column, and box (3 by 3 subgrid) must contain all of the digits from 1 to 9. Implement an efficient sudoku solver. Question 22 A knight is placed on a given square on an 8 x 8 chessboard. It is then moved randomly several times, where each move is a standard knight move. If the knight jumps off the board at any point, however, it is not allowed to jump back on. After k moves, what is the probability that the knight remains on the board? Question 23 You are given an array of length 24, where each element represents the number of new subscribers during the corresponding hour. Implement a data structure that efficiently supports the following: update(hour: int, value: int): increment the element at index hour by value. query(start: int, end: int): retrieve the number of subscribers that have signed up between start and end (inclusive). You can assume that all values get cleared at the end of the day, and that you will not be asked for start and end values that wrap around midnight. Question 24 Given an array of integers, find the first missing positive integer in linear time and constant space. In other words, find the lowest positive integer that does not exist in the array. The array can contain duplicates and negative numbers as well. For example, the input [3, 4, -1, 1] should give 2. The input [1, 2, 0] should give 3. You can modify the input array in-place. Question 25 Given an array of numbers, find the length of the longest increasing subsequence in the array. The subsequence does not necessarily have to be contiguous. For example, given the array [0, 8, 4, 12, 2, 10, 6, 14, 1, 9, 5, 13, 3, 11, 7, 15], the longest increasing subsequence has length 6: it is 0, 2, 6, 9, 11, 15. Question 26 An 8-puzzle is a game played on a 3 x 3 board of tiles, with the ninth tile missing. The remaining tiles are labeled 1 through 8 but shuffled randomly. Tiles may slide horizontally or vertically into an empty space, but may not be removed from the board. Design a class to represent the board, and find a series of steps to bring the board to the state [[1, 2, 3], [4, 5, 6], [7, 8, None]]. Question 27 Given a string and a set of delimiters, reverse the words in the string while maintaining the relative order of the delimiters. For example, given “hello/world:here”, return “here/world:hello”. Follow-up: Does your solution work for the following cases: “hello/world:here/”, “hello//world:here” Question 28 Given a list of points, a central point, and an integer k, find the nearest k points from the central point. For example, given the list of points [(0, 0), (5, 4), (3, 1)], the central point (1, 2), and k = 2, return [(0, 0), (3, 1)]. Question 29 Given an array of positive integers, divide the array into two subsets such that the difference between the sum of the subsets is as small as possible. For example, given [5, 10, 15, 20, 25], return the sets {10, 25} and {5, 15, 20}, which have a difference of 5, the smallest possible difference. Question 30 Given a sorted list of integers of length N, determine if an element x is in the list without performing any multiplication, division, or bit-shift operations. Do this in O(log N) time. Question 31 You come across a dictionary of sorted words in a language you’ve never seen before. Write a program that returns the correct order of letters in this language.
For example, given ['xww', 'wxyz', 'wxyw', 'ywx', 'ywz'] , you should return ['x', 'z', 'w', 'y'] . Question 32 Describe what happens when you type a URL into your browser and press Enter. Question 33 You are going on a road trip, and would like to create a suitable music playlist. The trip will require N songs, though you only have M songs downloaded, where M < N . A valid playlist should select each song at least once, and guarantee a buffer of B songs between repeats. Given N , M , and B , determine the number of valid playlists. Question 34 Write a program that computes the length of the longest common subsequence of three given strings. For example, given “epidemiologist”, “refrigeration”, and “supercalifragilisticexpialodocious”, it should return 5 , since the longest common subsequence is "eieio". Question 35 Given a list, sort it using this method: reverse(lst, i, j) , which reverses lst from i to j . Question 36 You have an N by N board. Write a function that, given N, returns the number of possible arrangements of the board where N queens can be placed on the board without threatening each other, i.e. no two queens share the same row, column, or diagonal. Question 37 Given an array of strictly the characters ‘R’, ‘G’, and ‘B’, segregate the values of the array so that all the Rs come first, the Gs come second, and the Bs come last. You can only swap elements of the array. Do this in linear time and in-place. For example, given the array [‘G’, ‘B’, ‘R’, ‘R’, ‘B’, ‘R’, ‘G’], it should become [‘R’, ‘R’, ‘R’, ‘G’, ‘G’, ‘B’, ‘B’]. Question 38 Suppose you are given a table of currency exchange rates, represented as a 2D array. Determine whether there is a possible arbitrage: that is, whether there is some sequence of trades you can make, starting with some amount A of any currency, so that you can end up with some amount greater than A of that currency. There are no transaction costs and you can trade fractional quantities. Question 39 Given an array of integers where every integer occurs three times except for one integer, which only occurs once, find and return the non-duplicated integer. For example, given [6, 1, 3, 3, 3, 6, 6], return 1. Given [13, 19, 13, 13], return 19. Do this in O(N) time and O(1) space. Question 40 Given a list of integers S and a target number k, write a function that returns a subset of S that adds up to k. If such a subset cannot be made, then return null. Integers can appear more than once in the list. You may assume all numbers in the list are positive. For example, given S = [12, 1, 61, 5, 9, 2] and k = 24, return [12, 9, 2, 1] since it sums up to 24. Question 41 Given a sorted array arr of distinct integers, return the lowest index i for which arr[i] == i . Return null if there is no such index. For example, given the array [-5, -3, 2, 3] , return 2 since arr[2] == 2 . Even though arr[3] == 3 , we return 2 since it's the lowest index. Question 42 Given an array of numbers arr and a window of size k , print out the median of each window of size k starting from the left and moving right by one position each time. For example, given the following array and k = 3 : [-1, 5, 13, 8, 2, 3, 3, 1] Your function should print out the following: 5 <- median of [-1, 5, 13] 8 <- median of [5, 13, 8] 8 <- median of [13, 8, 2] 3 <- median of [8, 2, 3] 3 <- median of [2, 3, 3] 3 <- median of [3, 3, 1] Recall that the median of an even-sized list is the average of the two middle numbers. 
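Several of the array questions in this batch (Question 37, for example) have well-known linear-time, in-place answers. As one hedged illustration, here is a Dutch-national-flag-style pass in Python; the function name is my own and this is a sketch rather than an official solution.

def segregate_rgb(arr):
    # Three-pointer partition: all 'R's first, then 'G's, then 'B's, in place.
    low, mid, high = 0, 0, len(arr) - 1
    while mid <= high:
        if arr[mid] == 'R':
            arr[low], arr[mid] = arr[mid], arr[low]
            low += 1
            mid += 1
        elif arr[mid] == 'G':
            mid += 1
        else:  # 'B'
            arr[mid], arr[high] = arr[high], arr[mid]
            high -= 1
    return arr

print(segregate_rgb(['G', 'B', 'R', 'R', 'B', 'R', 'G']))
# ['R', 'R', 'R', 'G', 'G', 'B', 'B']

A single pass with three pointers touches each element at most once and only swaps within the array, which satisfies the linear-time, in-place constraint stated in the question.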
Question 43 You are given an array of integers representing coin denominations and a total amount of money. Write a function to compute the fewest number of coins needed to make up that amount. If it is not possible to make that amount, return null. For example, given an array of [1, 5, 10] and an amount 56, return 7 since we can use 5 dimes, 1 nickel, and 1 penny. Given an array of [5, 8] and an amount 15, return 3 since we can use 5 5-cent coins. Question 44 Explain the difference between composition and inheritance. In which cases would you use each? Question 45 You are given a 2D matrix of 1s and 0s where 1 represents land and 0 represents water. Grid cells are connected horizontally or vertically (not diagonally). The grid is completely surrounded by water, and there is exactly one island (i.e., one or more connected land cells). An island is a group of cells connected horizontally or vertically, but not diagonally. There is guaranteed to be exactly one island in this grid, and the island doesn’t have water inside that isn’t connected to the water around the island. Each cell has a side length of 1. Determine the perimeter of this island. For example, given the following matrix: [[0, 1, 1, 0], [1, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0]] Return 14. Question 46 Given a string, return the length of the longest palindromic subsequence in the string. For example, given the following string: MAPTPTMTPA Return 7, since the longest palindromic subsequence in the string is APTMTPA. Recall that a subsequence of a string does not have to be contiguous! Your algorithm should run in O(n²) time and space. Question 47 Given a list of strictly positive integers, partition the list into 3 contiguous partitions which each sum up to the same value. If not possible, return null. For example, given the following list: [3, 5, 8, 0, 8] Return the following 3 partitions: [[3, 5], [8, 0], [8]] Which each add up to 8. Question 48 Given a tree, find the largest tree/subtree that is a BST. Given a tree, return the size of the largest tree/subtree that is a BST. Question 49 A knight’s tour is a sequence of moves by a knight on a chessboard such that all squares are visited once. Given N, write a function to return the number of knight’s tours on an N by N chessboard. Question 50 Implement a file syncing algorithm for two computers over a low-bandwidth network. What if we know the files in the two computers are mostly the same?
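To close out the list, here is a minimal sketch of the classic bottom-up dynamic program behind Question 43. Again, the Python code and naming are my own assumptions rather than any company's reference answer.

def fewest_coins(denominations, amount):
    # best[t] = minimum number of coins needed to make total t, or infinity if impossible
    INF = float('inf')
    best = [0] + [INF] * amount
    for total in range(1, amount + 1):
        for coin in denominations:
            if coin <= total and best[total - coin] + 1 < best[total]:
                best[total] = best[total - coin] + 1
    return best[amount] if best[amount] != INF else None

print(fewest_coins([1, 5, 10], 56))  # 7  (five 10s, one 5, one 1)
print(fewest_coins([5, 8], 15))      # 3  (three 5s)
print(fewest_coins([5, 8], 7))       # None (cannot be made)

The table has amount + 1 entries and each entry scans the denominations once, so the run time is O(amount x number of denominations).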
https://levelup.gitconnected.com/the-hardest-coding-interview-questions-ever-83e7bac07839
['Anh Dang']
2021-09-13 08:07:33.015000+00:00
['Quiz', 'Algorithms', 'Programming', 'Technology', 'Coding Interviews']
934
Genaro Network (GNX) Monthly Technical Report — November
November Monthly Report The November research and development focus of the Genaro Network team was on improving the performance of distributed storage and, building on that work, designing distributed network disks and running functional experiments for distributed collaborative files. Corresponding maintenance and data structure upgrades were carried out in the public chain and mining pool. Public chain layer: * Regular maintenance of the public chain * Verified functional completeness via testing under existing mainstream cross-chain frameworks * Reasonably optimized logs * Completed the heterogeneous cross-chain framework based on the test chain and started the testing process * Introduced the design plan for the supervisory layer, refined the process, and started coding Mining pool part: * Fixed the block browser bug in fetching the transaction list * Researched the data structure upgrade and ensured that the new structure is compatible with the historical structure * Modified the separation of the revenue account and the betting account to ensure the safety of the bet funds * Completed the modification of the mining pool administrator review and margin increase pages, which successfully passed testing * Completed the modification of the margin page, which also passed testing Storage layer: * Refactored development based on the Rust bridge * Measured the bridge's ability to accept user messages * Checked the stored message part against actual measurements and reasonably optimized the functional overhead * Optimized the message flow on the miner side, increased the message capacity of the channel, and conducted measurements * Designed the message processing logic to reduce unnecessary information and speed up response time * Iterated on storage sharding based on physical distance * Designed an atomic operation flow for distributed storage * Wrote the text version management process in JS * Integrated the bottom layer of Slate into the network; compilation is in progress Future Outlook Throughout 2019, Genaro's technology exploration, code submission, R&D, and practice never stopped across the public chain, storage, and mining pool. We look forward to excellent technology developers starting the trial, joining the community, exchanging ideas with the Genaro team, and making valuable suggestions regarding our products. For further development updates, please stay tuned to the official media channels of the Genaro Network project. Breaking News: The first edition of the GSIOP protocol is officially released On February 20, 2019, Singapore time, Genaro Network, the world's first smart data ecosystem with a Dual-Strata Architecture integrating a public blockchain with decentralized storage, officially released the first version of the GSIOP protocol. This is not only the product of nearly a year of hard work by Genaro's entire team of engineers, but it also marks a new milestone for Genaro in the practical application of cross-chain technology. Breaking News: G.A.O. (Genaro Alpha One) is officially launched Genaro Network, the future Smart Data Ecosystem for DApps, invites you to witness the new era of smart data, empowered by the revolutionary serverless interactive system! Recommended reading: Genaro public network mainnet officially launched | Community Guide Download Technical Yellow Paper Genaro's latest versions, Genaro Eden and Genaro Eden Sharer, will allow you to store your files in a more secure way and share your unused storage to earn GNX.
Get your Genaro Eden/Sharer for Linux, Windows and MAC OS right now from the official website: Git source repository is on GitHub>> Important: Warm reminder to our community members, please download Genaro Eden ONLY from our official website/GitHub and DO NOT trust any referral links and reposts from anyone, otherwise, we won’t be able to guarantee privacy and security of your data and protect you from scammers. Genaro Eden — The first decentralized application on the Genaro Network, providing everyone with a trustworthy Internet and a sharing community: Related Publications: Genaro’s Core Product Concept Genaro Eden: Five Core Features How Does Genaro’s Technology Stand Out? Genaro Eden Application Scenarios and User Experience The Genaro Ecosystem Matthew Roszak Comments on Release of Genaro Eden About Genaro Network The Genaro Network is the first smart data ecosystem with a Dual-Strata Architecture, integrating a public blockchain with decentralized storage. Genaro pioneered the combination of SPoR (Sentinel Proof of Retrievability) with PoS (Proof of Stake) to form a new consensus mechanism, ensuring stronger performance, better security and a more sustainable blockchain infrastructure. Genaro provides developers with a one-stop platform to deploy smart contracts and store the data needed by DAPPs simultaneously. Genaro Network’s mission is to ensure the secure migration of the core Internet infrastructure to the blockchain. Official Telegram Community: https://t.me/GenaroNetworkOfficial Telegram Community (Russian): https://t.me/GenaroNetworkOfficial_Ru
https://medium.com/genaro-network/2019-11-genaro-network-gnx-monthly-technical-report-november-ab7b1e2ee7c9
['Genaro Network', 'Gnx']
2019-11-30 08:50:09.017000+00:00
['Developer', 'Storage', 'Blockchain', 'Innovation', 'Technology']
935
Tesla Started Manufacturing Its Model Y EVs at Giga Shanghai.
The giant automobile manufacturer Tesla has started manufacturing its Model Y electric car at Giga Shanghai. The company saw tremendous growth in 2020, both in market cap and in the number of cars produced: Tesla's stock reached 661.77 USD, which is close to 50k in INR. Credits: Tesla.com The company's market cap is now 627.29B USD, roughly double the combined market capital of Toyota, Volkswagen, and Ford, which is absolutely marvelous growth for 2020. The company also achieved a production milestone, building nearly half a million cars in 2020. Credits: Statista.com The company's CEO, Elon Musk, stated back in 2010 that the company would produce nearly half a million cars by 2020. The company has also yet to finish constructing its Gigafactories in Berlin and Texas, intended especially for production of the Cybertruck. Interestingly, the company stopped its Model S and X production for 18 days, so we can hopefully expect a new design from Tesla in 2021. Either way, Tesla had wonderful growth in 2020 for a company that was actually started back in 2003. Credits: nextbigfuture.com In 2020, technology companies really had good profits compared with their four quarters of 2019. So hopefully we can expect a good start to 2021, both in technology and in health.
https://medium.com/@haribalaji-r4/tesla-started-manufacturing-its-model-y-evs-in-giga-shanghai-9da21b37dbd7
['Haribalaji R']
2020-12-27 15:05:44.436000+00:00
['Technology News', 'Elon Musk', 'Nasdaq', 'Tesla']
936
Is Deno a Threat to Node?
Is Deno a Threat to Node? Deno 1.0 was launched on May 13, 2020, by Ryan Dahl — the creator of Node Image copyrights Deno team — deno.land It’s been around for two years now. We’re hearing the term Deno, and the developer community, especially the JavaScript community, is quite excited since it’s coming from the author of Node, Ryan Dahl. In this article, we’ll discuss a brief history of Deno and Node along with their salient features and popularity. Deno was announced at JSConf EU 2018 by Ryan Dahl in his talk “10 Things I Regret About Node.js.” In his talk, Ryan mentioned his regrets about the initial design decisions with Node. JSConf EU 2018 — YouTube In his JSConf presentation, he explained his regrets while developing Node, like not sticking with promises, security, the build system (GYP), package.json and node_modules , etc. But in the same presentation, after explaining all the regret, he launched his new work named Deno. It was in the process of development then. But on 13th May 2020, around two years later, Deno 1.0 was launched by Ryan and the team (Ryan Dahl, Bert Belder, and Bartek Iwańczuk). So let’s talk about some features of Deno.
https://medium.com/better-programming/is-deno-a-threat-to-node-1ec3f177b73c
['Kapil Raghuwanshi']
2020-07-14 08:29:17.382000+00:00
['JavaScript', 'Startup', 'Technology', 'Nodejs', 'Programming']
937
Interaction Design Lessons from Oblivion — Why It Matters
But, why study sci-fi interfaces at all? This seems like an interesting hobby — if you’ve got the time — but what does this have to do with my real job? I was never really able to give a good, well-thought-out answer to why I spend so much time studying sci-fi interfaces before — or for that matter, why it would be a good idea for an established business to spend time looking at them. But a few years ago during an email exchange between authors Olli Sulopuisto of Nonfiktio and Chris Noessel that very question was asked: “…is studying movie UIs, I dunno, useful?… Have you learned something from sci-fi interfaces that would’ve been more difficult or impossible to gain by other means?” So, in response, Chris wrote an article, in which he gives 8 tangible benefits for studying sci-fi-interfaces. I highly recommend you read the full article yourself, but I’ll give you a quick run-down of what he says. 1. You build necessary skepticism You might want the things you design and build to be like the stuff you see in the movies, but some of the interfaces could cause disastrous consequences in the real world. So, studying these interfaces critically builds up your design immune system. 2. You get their good ideas This doesn’t happen all the time, but every now and then you’ll find a concept that can inform your work. The films that inspire these good ideas can behave like an advisor on your team that is only focusing on the blue sky thinking. 3. You can turn their bad ideas into good ones Star Wars: Episode IV (1977) Chris uses the gunner seat in the Millennium Falcon as a favorite example for this. In the real world, sound can’t travel through space, yet every time Han Solo fires the gun, you can hear the sound of a laser firing and things exploding. But, if you really think about it, it’s actually pretty smart since you could turn it around to say that the sound is added on purpose to give some user feedback — so he knows the gun’s firing properly. This type of exercise requires the technique of apologetics, where you carefully study what’s broken in the design and try to figure out why it’s actually brilliant. 4. You can avoid their mistakes Of course, Sci-fi interface designers have a different goal than real-world designers. That goal is to entertain, so they don’t necessarily have to care about the same things we do. So, when they get an interface wrong they can get it very wrong. Seeing how the actors interact with these design fails can be instructive and teach us what not to do in our own designs. 5. It’s great analytical and design practice Chris testifies that since he has been reviewing over 100 years worth of sci-fi interfaces, he has gotten really good at quickly and thoroughly being able to review interfaces in the real world. While I definitely do not have his track record for hours spent analyzing sci-fi interfaces, this still holds true for me as well since I began studying them. For example… The Amazing Spider-man (2012) This interface from Spider-man gets less than 2 seconds of screen time, but it only took me those 2 seconds to look at this interface and see there were major problems. When the lab is under attack, Gwen “flips” this “emergency lever” by four-finger-swiping a touchscreen. Fortunately for everyone in the lab, Gwen is a level-headed individual — and got the warning from Spidey in plenty of time to — calmly — walk to the touchscreen and accurately swipe the “lever”. But what if the power had gone out and the emergency generator failed? 
What if for some reason there was low visibility and she couldn’t see the screen? Or what if for some reason she had lost the use of her hands and couldn’t effectively four-finger-swipe? These are serious problems with a touchscreen in this situation. Why not just use an analog lever? The cool, high-tech factor doesn’t make up for the poor design choices for this context. All of that went through my head almost immediately after seeing this very short segment of the film. All thanks to the practice I’ve had taking a closer, more analytical look at the designs. 6. It’s speculative-tech literacy It helps you become more literate in future, speculative tech. Even though you may not realize it, sci-fi is a major — sometimes subtle, sometimes not so subtle — influence on how designers and users think about interfaces. Minority Report (2002) Minority Report is almost 20 years old, but if someone references the pre-crime scrubber, chances are you know exactly what they’re talking about. But, what’s good about it? What’s bad about it? What would you tell your client or design partner if they wanted to use it as a model for something you’re working on in the real world? Having an understanding of these interfaces can definitely be a benefit when designing for stuff that doesn’t exist yet. 7. Its blind spots are rich mines Comparing sci-fi tech with real-world tech helps lead to an understanding of the blind spots, and helps us see what we need to be thinking about but aren’t. 8. It inspires big thinking If we as designers only work with what we know can be done with the materials we’re familiar with, there would be no real transformative innovation — only incremental improvements to what has come before. While it is important to continuously iterate upon designs that have come before, if you want to disrupt an industry for the better, you need to remove the constraints of how it’s always been done before. You do still need to know where we’ve been since that is an important lesson in how you can make things better in the future and to keep yourselves from repeating the same mistakes. At the same time, you shouldn’t allow that to confine you into the predefined proverbial box. Analyzing sci-fi interfaces gives you a way to dream bigger and imagine what things would be like if the sky was the limit — and to imagine what it would be like if it could change the world for the better. Meta and Magic Leap in the past, as well as Microsoft and a few other big companies have been looking at how augmented reality could be the new way people interact with technology and with each other. Iron Man 2 (2010) Some experts even believe this will replace mobile phones in the very near future. It wouldn’t be like Minority Report where your arms are in an unnatural position, but more like Stark’s lab. With augmented reality that’s done right, — emphasis on being done the right way — your body’s positioning is more natural — you’re manipulating objects as you would in real life. It’s the neural path of least resistance and their goal is for people to be able to compute with a zero learning curve for the interfaces. Could you get this type of inspiration somewhere other than sci-fi? What other medium other than sci-fi focuses so much on future technology and society in such an imaginative and visual way? Video games come close, but not quite since there are still the real-world constraints of the usability of the actual game and platform technology. 
All that said, sci-fi interfaces are a fun and inspiring way to study, to learn and to think big.
https://medium.com/pintsizedrobotninja/interaction-design-lessons-from-oblivion-why-it-matters-de544e262f88
['Aleatha Singleton']
2020-04-23 23:31:37.437000+00:00
['Interaction Design', 'UX Design', 'Future Technology', 'Sci Fi Interface']
938
How To Enable Remote Desktop In VMware?
You can enable remote desktop for a virtual machine in VMware in several ways. You can get it done manually using the VMware Workstation program, or you can install separate remote access software. If you only require access within the local network, and you do not need access to the host machine all the time, it is a lot easier to get remote desktop connections enabled. You can enable remote desktop connections through the operating system and also within the VMware network settings, whichever is more convenient. First, you need to set up the port in VMware. This tells the program what to do when a request arrives from a remote desktop application. Configuring VMware For Remote Desktop Connections First, VMware needs to be configured so that RDP requests can be forwarded to the IP address of the virtual machine. The steps below are required for a successful configuration. Take a look: Also, read Managed Citrix VDI Desktop for a virtualization solution. Step 1: Go to the menu and click on the Settings button. Then go to the Hardware tab and select Network Adapter. From there, define the connection type by selecting NAT. Step 2: Open the command prompt in your virtual machine, run ipconfig, and look for the value following the IPv4 address. Keep a record of it for the later steps. Step 3: Go to the menu and click on Edit, then select Virtual Network Editor. Choose the NAT network type and select NAT Settings. Step 4: Click Add in the new prompt to include the new port forward. Fill in the information as: Host Port: 9997 (this is an open port number; if you are not certain which number to use, you can choose the one provided above). Then set the Type to TCP and enter the IP recorded in Step 2 as the virtual machine IP address. Set the virtual machine port number to 3389. This is the default RDP port and can be modified through registry editing. Save any open prompts so the configuration changes take effect. Step 5: In the final step, you need to enable RDP connections in the operating system. The approach differs between Windows versions. Windows 8.1: Search the Start screen for the remote access settings (for example, "who can use remote desktop"); this will help trim down the results. Windows 7: Go to the Start Menu and search for Remote Desktop, then select the users who are allowed to use Remote Desktop. Windows XP: Right-click the My Computer option in the Start Menu, then go to the remote settings. Get Connected To Virtual Machine With RDP You can connect to the virtual machine the same way you would to any other system. Go to the Start menu, open Mstsc, provide the computer name or IP address, and click Connect. As soon as you provide your login details, you will be connected! Wrap Up In this way, you will be able to enable remote desktop in VMware.
Connecting with a computer is a lot easier, be it virtual or not, with the use of Windows Remote Desktop. So, get started with the steps now!
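As a quick sanity check before launching Mstsc, you can verify that the NAT rule from Step 4 is actually answering on the host port. This is only a sketch: the host IP below is a placeholder you would replace with your own, and the port numbers assume the values used in the steps above.

import socket

HOST_IP = "192.168.1.50"   # placeholder: your VMware host machine's address
HOST_PORT = 9997           # the NAT "Host Port" that forwards to the guest's RDP port 3389

def rdp_forward_reachable(ip, port, timeout=3):
    # Returns True if a TCP connection to ip:port succeeds, i.e. the forward is listening.
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

print(rdp_forward_reachable(HOST_IP, HOST_PORT))

If this prints False, recheck the port forwarding rule and the guest's firewall before troubleshooting the RDP client itself.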
https://medium.com/@harshpahwa21/how-to-enable-remote-desktop-in-vmware-14686ab1753
['Harsh Pahwa']
2021-08-09 14:10:51.758000+00:00
['Technology', 'Business', 'Computers', 'How To', 'Remote Working']
939
A single tree that grows 40 different kinds of fruit
This magical tree grows 40 different kinds of fruit. It may be hard to believe, but this single tree grows over 40 different types of fruit, including peaches, plums, apricots, nectarines, cherries, and almonds. This special tree was created by Syracuse University professor Sam Van Aken using a technique called grafting, which many of us heard about in our biology classes back in school. Elaborating on the technique, he says: “The idea came from just sort of a fascination with the process of grafting when I had seen it done as a child. It was Dr. Seuss and Frankenstein and just about everything fantastic. I started traveling around Central New York in New York State to look for different varieties of stone fruit. Eventually I was able to find these different heirloom and antique varieties, but they are very rare, so I would bring them back here to my nursery and graft them onto a tree so that I could continue to use them. Now I have a huge collection of plums and apricots. Through the project I’ve worked with a lot of growers, and at first they didn’t understand it. They were like, why would you want to have a tree with that many different fruit on it? You would have to go back over and over to continue to harvest all the fruits. This project has always been an art project for me. I was really interested in the idea of a hoax; you know, a hoax transforms reality. Part of the idea for the Tree of 40 Fruit was to plant them in locations that people would sort of stumble upon. Once they happened upon one of these trees, they would start to question why the leaves are shaped differently and why they are different colors. And then in summer, when you would see all of these different fruit growing on them, and of course in spring when they blossom in different colors, it was like an artwork. When I first started, I just grafted the branches on, so each variety blossoms at a slightly different time, and I had a tree that blossomed all on one side but looked dead on the other. From that point I created a timeline of when all of these different varieties blossom in relationship to each other, so I could essentially sculpt how the tree would blossom. For each of the trees I keep a map, essentially a diagram of the tree. It takes a really long time. I start a tree and I let it grow for about three years, and at that point I can come in and start to graft onto those branches. Those four branches become eight, the next year eight becomes 16, and 16 becomes 32. It’s essentially an eight- to nine-year process. Essentially, what you’re doing with clear plastic on the grafted branches is creating a greenhouse around the graft, and all that humidity helps the graft heal in. The first tree was planted in 2011 and it has the 40 varieties, but I anticipate it will be about 3 or 4 years before it’s at that peak blossom. Unlike any other artworks that I’ve made, these things continuously evolve. I think one of the reasons why I’ve been able to keep it going for so long is that every year it’s something new, and when you come out here and the trees are all in blossom it’s really kind of an amazing experience, and you get different fruits all summer.” Visit www.codexell.tech for more like this.
https://medium.com/@codexelltech/a-single-tree-that-grows-40-different-kinds-of-fruit-45570c95a3e1
['Codexell Tech']
2020-12-19 12:30:46.421000+00:00
['News', 'Biology', 'Science', 'Technology', 'Trees']
940
Calculating Mill Level Deforestation & Carbon Risk Scores Across the Palm Supply Chain
Calculating Mill Level Deforestation & Carbon Risk Scores Across the Palm Supply Chain Illustrating the power of the Descartes Labs Platform to explore agricultural traceability challenges Zoomed out view of mill-level carbon scores. Lower scores mean less nearby deforested area and estimated forest carbon loss. According to the World Wildlife Fund, palm oil can be found in almost 50% of all packaged supermarket goods. As demand continues to increase, global agriculture and manufacturing companies are searching for ways to determine risk factors for deforestation and carbon emissions in the palm supply chain. Adding risk factors and associated traceability can help verify sustainable sourcing commitments and provide tools to help non-compliant participants adopt more sustainable practices. While there is some sourcing transparency from mills to manufacturers, a high number of third-party growers supply to these mills, posing a challenge for NDPE compliance. Therefore, companies need to understand probable source locations, land-use histories, and aggregate sourcing practices for each mill in their supply chain. They also need to attribute ongoing deforestation activity to a given mill and supplier. However, there is currently no widely available information that traces palm throughout the supply chain. And there is no direct way of attributing palm harvested within a plantation to the mill that eventually processes it. Further, concession ownership and permits have little transparency, making it challenging to attribute deforestation to specific growers. Still, there are a few factors that bound the sourcing region for a given palm mill (e.g. distance, transportation network, mill capacity, nearby mill capacity, etc.) that could allow us to estimate the probability and risk of a given mill processing palm from a given plantation. Palm concessions and mills over a subset of the island of Sumatra on top of a Sentinel-1 radar composite image Application of Deforestation & Forest Carbon Loss Risk Scores This post explores a relatively simple method to attribute risk from deforestation and forest carbon loss to individual mills throughout the palm supply chain in Southeast Asia. It applies deforestation and carbon loss risk scores to the entire Universal Mill List maintained by the World Resources Institute (WRI), Rainforest Alliance (RA), Proforest, and Daemeter Consulting. Our analysis leverages some key datasets and the compute power of the Descartes Labs Platform. Creating these datasets in a single development environment natively allows the carbon equivalent of deforestation events to be predicted; each has been built with the other in mind. Sentinel-2 and GEDI derived forest carbon composite Sentinel-2 derived palm area growing mask Sentinel-1 InSAR derived deforestation detections Forest carbon predicted using an allometric equation from GEDI TCH at RH 95 Methodology and Replication First, we show how the methodology works for a given mill. Then we replicate the methodology over every mill in the Universal Mill List. We start by drawing a radius around each mill, quantifying the deforested area and forest carbon lost within the region over a set period of time, and then normalize by the total forest area and forest carbon.
Creating a radius around each mill and displaying deforestation and forest carbon Total area deforested from July 2020 — December 2020: 3,355 ha Total forest carbon from July 2020 — December 2020: 575,478 Mg C Weighted Average Using Inverse Distance We also create a weighted average over the mill area using inverse distance such that deforested pixels closer to the mill are weighed more than those found farther from the mill. Creating a simple distance-weighted risk score for deforestation and forest carbon loss. Calculating Deforested Area Score & Carbon Loss Score Once we have our totals and our distance weights, we can calculate a deforested area score and carbon loss score for the mill in question. Scores generally range between 0 and 4, with higher scores indicating higher risk Deforested area score = 2.91 Distance-weighted average of deforested area ÷ maximum weighted average if the entire area was deforested Carbon loss score = 3.27 Distance-weighted average of carbon lost ÷ maximum weighted average if the entire area was deforested Applying the Descartes Labs Platform to the Full Universal Mill List Next, we use the Descartes Labs Platform to apply the same methodology to the Universal Mill List within Indonesia and Malaysia. Visual Results Using Descartes Labs Workflows API Finally, we can visualize it all together using Descartes Labs Workflows API. Zoomed out view of mill-level carbon scores. Lower scores mean less nearby deforested area and estimated forest carbon loss. Histograms showing the frequency distribution of total deforested area, total carbon lost, and the overall area & carbon scores for each mill When we zoom in on a particular location, we see the layers for each dataset along with the deforested area and carbon scores for nearby mills. Note that the size of the buffer around each mill has been reduced to prevent overlap during visualization. Using the Dataset to Inform Supply Chain Risk Assessment Now that the dataset has been created, it can be used to inform supply chain risk assessment for any company that sources from the Universal Mill List. To summarize, we built a mill-level deforestation & carbon emissions risk score across Malaysia and Indonesia. To do this, we leveraged proprietary Descartes Labs datasets to determine historical deforestation activity and carbon loss potential based on a distance-weighted formula to each mill. This produces a dataset that allows mills to be ranked by deforested area score and carbon loss score to determine focus areas for ongoing sustainability decisions. Click here to download a datasheet with more details about our Tropical Deforestation Monitoring Package.
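The post does not publish its exact formula, so the following is only a rough sketch of what an inverse-distance-weighted score of this shape could look like in NumPy. The array shapes, the simple 1/distance weighting, and the 0-to-4 scaling are assumptions for illustration, not the Descartes Labs implementation.

import numpy as np

def mill_risk_score(deforested, weights, max_score=4.0):
    # deforested: boolean array, True where a pixel was cleared during the period
    # weights:    inverse-distance weights for the same pixels (closer to the mill = larger)
    weighted_loss = np.sum(weights * deforested)
    worst_case = np.sum(weights)  # value if every pixel in the buffer had been cleared
    return max_score * weighted_loss / worst_case

# Tiny synthetic example: five pixels at increasing distance from one mill
distance_km = np.array([1.0, 2.0, 5.0, 10.0, 20.0])
weights = 1.0 / distance_km
deforested = np.array([True, False, True, False, False])
print(round(mill_risk_score(deforested, weights), 2))  # ~2.59 on the 0-to-4 scale

A carbon loss score would follow the same pattern, with per-pixel forest carbon lost in place of the boolean deforestation mask and total forest carbon as the normalizer.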
https://medium.com/descarteslabs-team/calculating-mill-level-deforestation-carbon-risk-scores-across-the-palm-supply-chain-a117ea31e874
['Descartes Labs']
2021-09-14 03:28:30.649000+00:00
['Satellite Technology', 'Sustainability', 'Geospatial', 'Deforestation', 'Data Science']
941
Here’s How Technology Connects Us
I live overseas from my family. Google Maps tells me I am 4,101 miles away from them. That is a long old way from a hug from my parents, a walk with my dog, or a coffee with my friends. One of my best friends from high school, who I’m still in touch with, lives in Seattle. That’s even farther, especially counting the eight hours of time difference between us. My friends from college all scattered across the country, and one is even in Australia at the moment. The other day I read an article about how technology these days is turning us into antisocial beings who prefer to send emojis or memes rather than getting together or sending physical letters. Photo by Debby Hudson on Unsplash That point of view does make some sense to me. After all, the convenience of texting or just “keeping up” via Facebook updates far surpasses actually getting together or having a phone call. It’s frustrating when you want to see someone in person but they can’t be bothered to do more than just text. But as the world grows bigger and we spread further, technology is what keeps the world small.
https://medium.com/the-ascent/heres-how-technology-connects-us-10bc02124742
['Zulie Rane']
2019-05-19 16:30:58.038000+00:00
['Relationships', 'Technology', 'Social Media', 'Life Lessons', 'Communication']
942
Technology • Innovation • Publishing — Issue #156
Innovation hbr.org HT @fgilbane Most companies don’t have a clear path to success. But that might change with a new, data-centric approach to AI. By @AndrewYNg #AI Some K-pop stars are bypassing Twitter and Facebook to create platforms for fans, like Universe, which features AI-generated voice calls with the idols — www.theverge.com Korea’s fancafes have given rise to new social platforms like LYSN, Weverse, and Universe, which serve as self-contained versions of Facebook or Twitter entirely for K-pop fans. #AI Technology Facebook Wants Us to Live in the Metaverse — www.newyorker.com What does that even mean? stratechery.com Interesting Take on Metaverses via @stratechery #metaverse Gaming braces for its Netflix and Spotify moment — www.protocol.com A new wave of game subscriptions has only just begun emerging in the last few years. Now, as these subscriptions are married to nascent but fast-growing cloud gaming services that allow you to stream games to almost any screen, the industry is bracing itself for a potential paradigm shift akin to what happened to television, film and music. mitsloan.mit.edu Having built new norms during COVID-19, firms should now focus on operations, employee experience, customer experience, and organizational culture. Via @MITSloan techcrunch.com Twitter may have shut down its Stories features known as Fleets, but the Stories format will continue to invade other social platforms. TikTok today confirmed it’s piloting a new feature, TikTok Stories, which will allow it to explore additional ways for its community to bring their creative ideas to life. Apple wins patent for in-screen Touch ID and Face ID — 9to5mac.com #Apple #patents www.adexchanger.com #Amazon How Amazon plans to cut waste after backlash over destruction of unused items — apple.news Amazon has launched two programs as part of an effort to give products a second life when they get returned to businesses that sell items on its platform. Amazon will pay you $10 in credit for your palm print biometrics — techcrunch.com The retail giant has a spotty history with biometric data. libn.com The online shopping giant is pushing landlords to give its drivers the ability to unlock apartment-building doors themselves with a mobile device. www.axios.com A 9/11 documentary, “The Outsider,” will debut on Facebook Live for $3.99 on Aug. 19. #Facebook www.bloomberg.com Facebook Inc. has disabled the personal accounts of a group of New York University researchers studying political ads on the social network, claiming they are scraping data in violation of the company’s terms of service. adage.com RT @adage Alphabet Inc.’s Google was accused in an antitrust lawsuit of giving itself the edge in online advertising by cutting a cozy deal with Facebook that gives the social network an advantage in virtual auctions which determine whose ads appear where. www.theregister.com ‘This is peak Chrome; a reasonably good idea hampered because it was pushed out thoughtlessly’ www.mediapost.com HT @MediaPost www.technologyreview.com MT @michellemanafy Many TikTok users assumed that the text-to-speech voice they heard on the app wasn’t a real person. It was. A Canadian voice actor named Bev Standing had never given ByteDance, the company that owns TikTok, permission to use it. By @histoftech Publishing & Media publishingperspectives.com A total 2,661 authors, illustrators, and translators question UK ‘reconsideration’ on post-Brexit copyright and parallel imports on books. 
#Copyright @Porter_Anderson @PublishersAssoc @Soc_of_Authors www.bincfoundation.org MT @BincFoundation Join us in celebrating Binc’s 25th Anniversary! Thursday, August 12, 2021, 8pm EST #ThinkBinc #Bincat25 www.publishersweekly.com HT @BoSacks The International Publishers Association and PEN have documented hundreds of attacks on press freedom, many of them targeting LGBTQ and dissident groups. Scribd Originals Issues Short “Semi-Autobiographical Tale” By Margaret Atwood — www.prnewswire.com www.booknetcanada.ca Pearson CEO predicts demise of physical textbooks as digital service launches — money.yahoo.com The 177-year-old company debuted its Pearson+ subscription service in the U.S. deadline.com Private equity deals like the recent $900 million takeover of Hello Sunshine are poised to continue long into the future, dealmakers say. www.niemanlab.org “No matter the amount of tweeting or social media promotion that you might do, [the place] where you need to look to grow your audience is existing podcasts.” #podcasting www.theverge.com HBO Max is becoming more of a podcast app. This fall, it’ll be the exclusive home for Batman: The Audio Adventures, a scripted podcast starring Jeffrey Wright and Rosario Dawson. www.adweek.com The two companies have formed Condé Nast Certified Video Plus, providing advertisers access to both audiences through one offering. www.adweek.com This will be a year-long content-sharing collaboration to cross-pollinate the Bloomberg Equality and Ebony audiences through original content. digiday.com Good Housekeeping set a standard at Hearst UK that the rest of the portfolio wants to replicate. wwd.com wwd.com In recent years, a number of media brands have been tapping into the beauty market. #magazines apnews.com Twitter signed a deal with The Associated Press and Reuters to help elevate accurate information on its platform. Twitter said the program will expand its existing work to help explain why certain subjects are trending on the site, to show information and news from trusted resources and to debunk misinformation. www.reuters.com AT&T Inc’s satellite television provider, DirecTV, will become a standalone video business as part of a deal between the wireless service provider and buyout firm TPG Capital. www.nytimes.com The company expects to end the year with about 8.5 million. In its quarterly results, it reported holding nearly $1 billion in cash. variety.com News Corp will pay $1.15 billion in cash for Oil Price Information Service (OPIS), a specialized, digital-centric info provider. www.hollywoodreporter.com HT @LWShanley @publishingtrend LinkedIn Acquires Tutorial Video App Jumprope as it Looks to Expand its Creator Tools — www.socialmediatoday.com Revel Acquires The Woolfer, the Leading Social Platform for Women Over 40; Announces $3.5M Seed Round — www.prnewswire.com Resources & Opportunities Scholarships — Hugo House — hugohouse.org MT @diversebooks Writers: @HugoHouse is offering writing classes both in-person and online, along with scholarships for their Fall Quarter courses! Applications close on August 16th:. New Visions Writing Contest for Writers of Color and Indigenous/Native Writers — www.leeandlow.com RT @diversebooks Unpublished BIPOC writers: @LEEandLOW has extended the deadlines for both its annual writing contests to August 15th! Winners receive $2000 + a standard publication contract. 
New Visions (MG/YA): http://ow.ly/kGM450FKw62 New Voices (Picture Books): http://ow.ly/B5Cw50FKw60 help.medium.com RT @rgay Medium is hosting a writing challenge. Details here: https://help.medium.com/hc/en-us/articles/4405083394455-Medium-Writers-Challenge-Contest-Official-Rules… And there are some FAQs about your intellectual property and the rights Medium is seeking. publishingperspectives.com HT @elizabethscraig The new award, with its inaugural presentation set for next summer, is open to submissions both in fiction and nonfiction. @pubperspectives #wkb84 Join us for Penguin Teen Summer Festival! — Penguin Teen — www.penguinteen.com #teamPRH www.nyfa.org RT @MadeinNY We invite NYC storytellers to apply for #NYCWomensFund for the opportunity to bring their films, docs, TV shows, music projects, and theater productions by, for, or about women to life on screen and stage. Apply now at @nyfacurrent: http://nyfa.org/NYCWomensFund. www.cbsnews.com MT @MarkLevineNYC Very cool: @Yelp now let’s you filter your search for restaurants, stores etc by whether all its staff are fully vaccinated, and whether they screen for vaccination. www.microcovid.org MT @Pistachio I’m planning to __________ but now I wonder if it’s safe given how Delta COVID spreads. How can i calculate the risks?
https://medium.com/@ksandler1/technology-innovation-publishing-issue-156-aa9e0c26d531
['Kathy Sandler']
2021-08-09 14:16:44.234000+00:00
['Podcasting', 'Technology', 'Media', 'Publishing', 'Innovation']
943
(Un)Tethered
Ah, Tether. The stablecoin above all stablecoins. The one coin you can trust, that’s “tethered” to the US Dollar…or not. For the uninitiated, Tether is a stablecoin intended to be pegged one to one to the US Dollar. The intent of the coin is to allow HODLers to “hold” US dollars without having to cash out to fiat. This also enables a holder, on paper, to avoid wild price swings seen in other currencies. Tether worked — for quite some time. However, this one to one pegging flailed this week, with the price of Tether actually dropping below $1. What gives? Demand could be dropping, which can lead, among other things, to a price drop. With a sell off of Tether, the price will drop, and this could lead to a self reinforcing cycle leading to impressions that Tether (USDT) is unstable. With Tether at 99 cents, it becomes more expensive to purchase anything by using USDT. Think you’re unaffected, because you don’t own any of them tethers? Think again. According to researchers at the University of Texas, Tether is used by the larger crypto market to “provide price support” — so even if you don’t own any USDT, a sneeze for Tether could be a disease for the market as a whole. Furthermore, other stablecoins have been aiming to overtake Tether while it is currently in a weak spot. Tether has also been dogged by controversy: There was some sketchiness in accusations saying that Tethers were created out of thin air along with Tether burning 500 million coins this week. Tether claims to be backed by actual USD, but in reality have actually never been truly audited. In the end, you have to approach all of this with caution — be wary of many things in cryptoland, but especially be careful of your good ol USDT. After all, a dollar may not actually be “a dollar”! For more articles like these and crypto-related events in the SF Bay Area, please sign up for our free weekly newsletter at thepublickey.io. — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — Our sponsors: Are you a developer seeking to dive into the blockchain space where you can have a disruptive impact on society and make a high-end six-figure salary? There are currently 14 open jobs for every Blockchain Engineer. Condense months of independent struggle and research into just one week with a curated Blockchain development experience as part of The Den’s premiership course: Blockchain Professional Developer Bootcamp. Get a $300 discount with the Key ‘s promo code: Monero39.
https://medium.com/thepublickey/un-tethered-4b1adbae8470
['The Public Key']
2018-10-28 22:02:26.768000+00:00
['Blockchain Technology', 'Cryptocurrency', 'Blockchain', 'Ethereum', 'Bitcoin']
944
Mapping and Managing Your Digital Transformation Journey
Traditional freight brokerages are adopting powerful new technologies at an increasingly fast pace. Technology is the foundation for building better engagement and making your brokerage “stickier” with the carrier community in a capacity-challenged market. This session explores how evolving technology is impacting brokers, and the key operational challenges that can be solved — and deliver rapid ROI — with the right plan and approach to building out your brokerage’s technology capabilities. Watch these short clips from the interview to learn: How to evaluate partners, map out a plan, and prioritize technologies that offer “quick wins.” The top 3 issues brokers should focus on and how resolving these can elevate the business. Why listening to the voice of the customer is critical to effective decisions and avoiding failure. Training and culture; bringing your workforce along, successfully managing change. Why technology projects are not one and done, the continuous improvement journey. Where is your digital transformation journey taking Becker Logistics? When planning your digital technology transformation journey, there are many factors to consider, including scalability and bringing your customers with you on the journey. Jim Becker from Becker Logistics discusses the importance of scaling with the technology and working toward substantial growth over time rather than just jumping in 100% right off the bat. You first need to test out the technology and make sure that it’s working properly for your business, your customers, and your carriers to make each of you profitable. As you start to create a plan for your digital technology transformation journey, the first thing you need to think about is what your digital strategy looks like. The platform you are adopting should fit with your strategy in 3 main categories: Prasad Gollapalli from Trucker Tools dives into the process of planning your digital technology journey and the most important things to focus on when starting to choose which platform you should adopt. Jim Becker from Becker Logistics talks about the importance of stakeholders internally and externally and how every business decision, such as their choice to adopt digital freight matching, affects every stakeholder involved in the business. The 2 main benefits a technology must have are a live connection with carriers and real-time data. However, a lot of brokers get caught up in the buzz words, or “noise,” when they are selecting a new technology and end up with a solution that doesn’t match their strategy. Prasad Gollapalli of Trucker Tools discusses how cutting through the “noise” and focusing on the benefits of a digital platform will help you narrow down your choices and select the right technology as you transition to a new platform. “If you aren’t on a digital platform you are going to be left behind” — Jim Becker, CEO, Becker Logistics Jim Becker of Becker Logistics dives into the process of how they plan to get up to 90% of their freight booked through digital freight matching and how Trucker Tools has helped them improve their business strategy and efficiency with the predictive data we provide. What are the biggest lessons learned in Becker Logistics’ digital transformation journey?
“If you don’t invest in the future, you won’t have a future” — Jim Becker, CEO, Becker Logistics Jim Becker from Becker Logistics stresses the importance of investing in the future and not being afraid to spend the capital to improve your business. Prasad Gollapalli from Trucker Tools also discusses the importance of having a clear goal of where you are going and using technology to reach that goal. Technology is the connectivity between you and your carriers and will help to build that network and keep everyone happy and profitable. To watch the full interview, click the video below.
https://medium.com/trucker-tools/mapping-and-managing-your-digital-transformation-journey-8081c3170c4f
['Tracy Neill']
2021-06-10 20:18:15.360000+00:00
['Shipping', 'Broker', 'Transportation', 'Technology', 'Logistics']
945
100 Words On….. Knowledge
Photo by Kuma Kum on Unsplash I have been “cautioned” over the years about giving away free knowledge through blogs, social media posts, and articles. While disguised as advice, their intent has always appeared to come from a hidden agenda to package my thoughts and sell them to the highest bidder. My intent has always been to be thought provoking and to engage in discussions around the digital challenges that face humanity. Action is more likely when driven by internal motivation rather than external direction. For Cyber Security challenges, I am sure we agree that we need all the help we can get, free or otherwise.
https://medium.com/the-100-words-project/100-words-on-knowledge-5e6549a1b520
['Digitally Vicarious']
2020-12-17 04:43:40.086000+00:00
['Knowledge', 'Information Technology', '100 Words Project', 'Intellectual Property', 'Education']
946
WatcH Argentina vs Peru Live Stream Soccer
WATCH Live Streaming Argentina vs Peru Live FIFA World Cup Qualifying — CONMEBOL Full HD Soccer [ULTRAᴴᴰ1080p] Argentina vs. Peru 2020| Live Stream,. Argentina vs. Peru broadcast free all around the world. Visit here to get up-to-the-minute sports news coverage, scores…t.c Argentina vs. Peru Watch Live Streaming: “ Argentina vs. Peru: CONMEBOL WCQ live stream, TV channel, how to watch online, news, odds, start time VISIT HERE>>> https://tinyurl.com/y5bwmsj3 MATCH INFO Start date: 18 Nov 2020 06:30 Location: Lima Venue: Estadio Nacional Referee: Wilmar Roldan, Colombia Livestreaming, what’s in it for us? Technology has advanced significantly since the first internet livestream but we still turn to video for almost everything. Let’s take a brief look at why livestreaming has been held back so far, and what tech innovations will propel livestreaming to the forefront of internet culture. Right now livestreaming is limited to just a few applications for mass public use and the rest are targeted towards businesses. Livestreaming is to today what home computers were in the early 1980s. The world of livestreaming is waiting for a metaphorical VIC-20, a very popular product that will make live streaming as popular as video through iterations and competition. Image for post Shared Video Do you remember when YouTube wasn’t the YouTube you know today? In 2005, when Steve Chen, Chad Hurley, and Jawed Karim activated the domain “” they had a vision. Inspired by the lack of easily accessible video clips online, the creators of YouTube saw a world where people could instantly access videos on the internet without having to download files or search for hours for the right clip. Allegedly inspired by the site “Hot or Not”, YouTube originally began as a dating site (think 80s video dating), but without a large ingress of dating videos, they opted to accept any video submission. And as we all know, that fateful decision changed all of our lives forever. Because of YouTube, the world that YouTube was born in no longer exists. The ability to share videos on the scale permitted by YouTube has brought us closer to the “global village” than I’d wager anyone thought realistically possible. And now with technologies like Starlink, we are moving closer and closer to that eventuality. Although the shared video will never become a legacy technology, before long it will truly have to share the stage with its sibling, livestreaming. Although livestreaming is over 20 years old, it hasn’t gained the incredible worldwide adoption YouTube has. This is largely due to infrastructure issues such as latency, quality, and cost. Latency is a priority when it comes to livestreams. Latency is the time it takes for a video to be captured and point a, and viewed at point b. In livestreaming this is done through an encoder-decoder function. Video and audio are captured and turned into code, the code specifies which colours display, when, for how long, and how bright. The code is then sent to the destination, such as a streaming site, where it is decoded into colours and audio again and then displayed on a device like a cell phone. The delay between the image being captured, the code being generated, transmitted, decoded, and played is consistently decreasing. It is now possible to stream content reliably with less than 3 seconds of latency. 
Sub-second latency is also common, and within the next 20 or so years we may witness the last cable broadcast (or perhaps cable will be relegated to the niche market of CB radios, landlines, and AM transmissions). On average, the latency associated with a cable broadcast is about 6 seconds. This is mainly due to limitations on broadcasts coming from the FCC or a similar organization in the interests of censorship. In real-life terms, however, a 6-second delay on a broadcast is not that big of a deal. In all honesty a few hours' delay wouldn't spell the doom of mankind. But for certain types of broadcasts, such as election results or sporting events, latency must be kept at a minimum to maximize the viability of the broadcast. Sensitive Content is Hard to Monitor Advances in AI technologies like computer vision have changed the landscape of internet broadcasting. Before too long, algorithms will be better able to prevent sensitive and inappropriate content from being broadcast across the internet on livestreaming platforms. Due to the sheer volume of streams it is much harder to monitor and contain internet broadcasts than it is cable, but we are very near a point where inappropriate broadcasts can be reliably detected and interrupted instantaneously. Currently, the majority of content is monitored by humans. And as we've learned over the last 50 or so years, computers and machines are much more reliable and consistent than humans could ever be. Everything is moving to an automated space, and content moderation is not far behind. We simply don't have the human resources to monitor every livestream, but with AI we won't need them. Video Quality In the last decade we have seen video quality move from 720p to 1080p to 4K and beyond. I can personally remember a time when 480p was standard and 720p was considered a luxury reserved for only the most well-funded YouTube videos. But times have changed, and people now expect video quality of at least 720p. Livestreaming has always had issues meeting these video-quality demands. When watching streams on platforms like Twitch, the video can cut out, lag, drop in quality, and stutter all within about 45 seconds. Of course this isn't as rampant now as it once was; however, sudden drops in quality will likely be a thorn in the side of live streams for years to come. Internet Speeds Perhaps the most common issue one needs to tackle when watching a live stream is internet speed. Drops in video quality and connection are often due to the quality of the internet connection between the streamer and the viewer. Depending on the location of the parties involved, their distance from the server, and the allocated connection speed, the stream may experience some errors. And that's just annoying. Here is a list of the recommended connection speeds for three of the most popular streaming applications: Facebook Live recommends a max bit rate of 4,000 kbps, plus a max audio bit rate of 128 kbps. YouTube Live recommends a range between 1,500 and 4,000 kbps for video, plus 128 kbps for audio. Twitch recommends a range between 2,500 and 4,000 kbps for video, plus up to 160 kbps for audio. Live streams are typically available for those of us with good internet. Every day more people are enjoying the high-quality speeds provided by fibre-optic lines, but it will be a while until these lines can truly penetrate rural and less populated areas. Perhaps when that day comes we will see an upsurge of streaming coming from these areas.
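As a rough sanity check on the bitrates listed above, the following sketch converts a stream's maximum video-plus-audio bitrate into approximate data usage per hour. This is illustrative arithmetic only; real usage varies with encoder overhead and the actual bitrate chosen:

```python
# Rough data-usage math for the maximum bitrates listed above (illustrative only).
def gb_per_hour(video_kbps: float, audio_kbps: float) -> float:
    total_kbps = video_kbps + audio_kbps          # kilobits per second
    bits_per_hour = total_kbps * 1000 * 3600      # kilobits -> bits, 3600 s per hour
    return bits_per_hour / 8 / 1e9                # bits -> bytes -> gigabytes

for name, video, audio in [("Facebook Live", 4000, 128),
                           ("YouTube Live", 4000, 128),
                           ("Twitch", 4000, 160)]:
    print(f"{name}: ~{gb_per_hour(video, audio):.2f} GB per hour at max settings")
```

At the 4,000 kbps ceiling, both streamer and viewer are moving close to 2 GB of data per hour, which is why connection quality on either end shows up so quickly as stutter and dropped quality.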
Language Barrier You can pause and rewind a video if you didn't understand or hear something, and many video-sharing platforms provide the option for subtitles. But you don't really get that with a live stream. Pausing and rewinding an ongoing stream defeats the purpose of watching a stream. However, the day is soon approaching when we will be able to watch streams in our own native language, with subtitles, even if the streamer speaks something else. Microsoft Azure's Cognitive Speech Services can give livestreaming platforms an edge in the future, as they allow speech to be automatically translated from language to language (a short sketch of this appears at the end of this piece). The ability to watch a livestream in real time, with the added benefit of accurate subtitles in one's own language, will also assist language learners in deciphering spontaneous speech. Monetization One of the most damning features of a live stream is the inherent difficulty in monetizing it. As mentioned before, videos can be paused and ads inserted. In videos, sponsored segments can be bought where the creators of the video read lines provided to them. Ads can run before videos, and so on. But in the case of a spontaneous live stream, sponsored content will stick out. In the case of platforms like YouTube there are ways around ads. Ad blockers, the skip-ad button, the deplorable premium account, and fast-forwarding through sponsored segments all work together to limit the insane number of ads we see every day. But in the case of a live stream, ads are a bit more difficult. Livestreaming platforms could implement sponsored overlays and borders or a similar graphical method of advertising, but the inclusion of screen-shrinking add-ons like that may cause issues on smaller devices where screen size is already limited. Monthly subscriptions are already the norm, but in the case of a livestreaming platform (Twitch Prime notwithstanding), it may be difficult for consumers to see the benefit in paying for a service that is by nature unscheduled and unpredictable. Live streams are great for quick entertainment, but as they can go on for hours at a time, re-watching streamed content is inherently time-consuming. For this reason, many streamers cut their recorded streams down and upload them to platforms like YouTube, where they are monetized through a partnership program. It is likely that for other streaming platforms to really take off, they would need to partner with a larger company and offer services similar to Amazon and Twitch. What Might the Future of Livestreaming Look Like? It is difficult to say, as it is with any speculation about the future. Technologies change and advance beyond the scope of our imaginations virtually every decade. But one thing that is almost a certainty is the continued advancement of our communications infrastructure. Fibre-optic lines are being run to smaller towns and cities. Services like Google Fiber, currently offered at 1 gigabit per second, have shown the capabilities of our internet infrastructure. As services like this expand we can expect to see a large increase in the number of users seeking streams, as the service they interact with will be more stable than it is now. Livestreaming, at the moment, is used frequently by gamers and esports and hasn't yet seen the mass commercial expansion that is coming. The future of livestreaming is on its way. For clues about how it may look in North America, we can look to Asia (Taobao).
Currently, livestreaming is quite popular in the East thanks to a phenomenon that hasn't quite taken hold on us Westerners: live commerce. With retail stores closing left and right, we can't expect Amazon to pick up all of the slack (as much as I'm sure they would like to). …
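The piece above mentions Microsoft Azure's Cognitive Speech Services as one route to translating a streamer's speech into live subtitles. As a hedged illustration, here is a minimal sketch using the Azure Speech SDK for Python; the subscription key, region, and language choices are placeholders, and this is one possible integration rather than anything the article prescribes:

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials -- replace with a real key and region.
speech_key = "YOUR_AZURE_SPEECH_KEY"
service_region = "westus"

translation_config = speechsdk.translation.SpeechTranslationConfig(
    subscription=speech_key, region=service_region)
translation_config.speech_recognition_language = "es-ES"   # language the streamer speaks (assumed)
translation_config.add_target_language("en")               # language the viewer wants subtitles in

audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config, audio_config=audio_config)

def on_recognized(evt):
    # Emit one subtitle line per recognized utterance.
    if evt.result.reason == speechsdk.ResultReason.TranslatedSpeech:
        print("EN subtitle:", evt.result.translations["en"])

recognizer.recognized.connect(on_recognized)
recognizer.start_continuous_recognition()
input("Press Enter to stop subtitling...\n")
recognizer.stop_continuous_recognition()
```

A streaming platform would feed the encoder's audio track in rather than the default microphone and push each translated line into the subtitle overlay, but the recognize-translate-display loop is the same shape.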
https://medium.com/@messivsperuliveon/watch-argentina-vs-peru-live-stream-soccer-f1375bcbb03e
['Argentina Vs Peru Live Tv']
2020-11-18 00:20:34.736000+00:00
['Technology', 'Live Streaming', 'Social Media', 'Sports', 'Argentina']
947
Steve Jobs: The Confluence of Artistry and Technology
Steve Jobs: The Confluence of Artistry and Technology. The story of a brutally honest artist who deeply cared about his creations. Photo by Florian Doppler from Pexels. Steve had humble beginnings as a young chap. His parents worked jobs like bookkeeper and repo man and were just able to make ends meet. He used to get bored in school lectures but loved to watch his father carve beautiful cars out of junk as a hobby. He would go on frequent fasts and fruitarian diets for weeks and would go anywhere barefoot due to his inclination towards Zen spiritualism, yet he could be as bratty as occasionally tripping on acid and LSD. Once he travelled all the way from the US to India in search of a spiritual guru for seven months and voluntarily stayed in poverty throughout his travels. He dropped out of college and attended calligraphy classes instead. The boy knew how to get his work done even as a young chap: he once called the owner of HP to get a spare part and ended up getting a summer internship at HP. He was born to an unwed couple, or as he called them, his biological parents: Joanne Schieble and a young teaching assistant of Syrian origin, Abdulfattah John Jandali. The couple could not get along because Joanne's father was against their marriage, so much so that the child had to be put up for adoption by his mother. His mother placed a condition, though: the adopting parents should be at least college graduates. But good stories become great because there is a twist, and this one had one right at the beginning. He was first adopted by a lawyer couple, which satisfied Joanne's condition, but they later backed out because they wanted a girl. He was then adopted by a high-school dropout, Paul Jobs, and his wife Clara, with a settlement with Joanne reached only after they pledged that they would save funds for the child's college education. Paul and Clara Jobs named their son Steve. In his childhood days, Steve was fascinated by Paul's craftsmanship. Paul loved to restore old cars as a hobby and would sell them later. Paul and Clara always made him feel that he was special rather than adopted. Steve found school lectures uninteresting not because he was weak but because he was indeed special. Once his parents were called in and told that he seemed uninterested in lectures, to which Paul told the teachers, "If you can't keep him interested, it's your fault." Then one of his teachers started bribing him to do his homework; she would give him questions from two grades above the one he was in, and he would do them with ease. Later, the school administration told his parents to make him skip two grades, but his parents settled on skipping one. In college, Steve met another Steve: Steve Wozniak, who was famous for his wizardry in class. He was a geeky prankster. He created a device that emitted TV signals and made the screen fuzzy. He would use it in a place where lots of people were watching TV together. People would think there was a signal issue and would try to fix the antenna; once someone tried to adjust it, he would clear the fuzziness. Just as the person moved away from the antenna, he would make the screen fuzzy again, and this would go on and on until one person stood permanently holding the antenna in one position. Known as Woz, Wozniak was super talented at electronics. He built a device that connected a keyboard and a TV: everything you typed on the keyboard became visible on the TV screen.
Not a big deal now, but back then it was nothing less than a miracle to Jobs, who saw it as a huge opportunity to build something of their own. He drove Woz and the device to the Homebrew Computer Club, where technology enthusiasts came to watch new devices created by hobbyists. They displayed their device at the club, and Steve Jobs, with his great articulation, got a hardware shop owner to order 50 such devices at $500 each, paid on delivery. They had to buy parts worth $15,000. They took a loan of $5,000 from a friend's father, further credit was arranged through Jobs' persuasion, and that's how Apple was born; that computer became the Apple I. After their first successful delivery they developed the Apple II, a sleek machine and a super successful one as well, which served as the cash cow for Apple for a very long time. But Steve Jobs didn't like the Apple II. Why? Because it had more ports to connect multiple devices for the hobbyists. He always wanted end-to-end control of his product, and even of its experience, and those ports would never have allowed that kind of control. By this time, his on-and-off girlfriend Chrisann had moved into his 4-bedroom apartment. Steve was sharing this apartment with a friend named Dan Kottke. It was at this time that she got pregnant with Steve's child. Interestingly, Steve was 23 years old when Chrisann gave birth to a baby girl. This was the same age at which his biological father Jandali had him. His daughter was named Lisa, and he never shared a very cordial relationship with her. Another commonality between his biological parents and him: Steve never married Chrisann. Apple went public and Steve Jobs became a rich man, but his quest for end-to-end control only became bigger. He pulled off a real growth hack when he convinced Xerox leadership to let him visit their R&D facility in return for the opportunity to make a one-million-dollar investment in Apple. Yes, it was a bargain! Apple engineers got to know about bitmapping, which was the basis of the GUI. The world was only aware of the DOS black screen until then. This helped Apple create their own operating system, which had a GUI and which we know today as the Mac operating system. Steve could be an absolute charmer, verbally seductive, or he could be as ruthless as calling a full-blown effort of his team complete shit. He would leave no stone unturned to maintain end-to-end control over the user experience and not let his device become a toy for the hobbyist. So much so that even the screws for the system that housed the Macintosh were designed so they could not be opened with a standard screwdriver. He would agonize over curves or lines on a computer design for days until the product felt perfect to him. He could go to any length to make his point on design. Once he took a colleague on a drive to show that rounded rectangles are everywhere, and that the device should also be in that shape. It was in Steve's nature to see the world in binary: either something was the world's most beautiful thing or it was complete shit. He charmed John Sculley, Pepsi's CEO, into joining Apple and taking it to the next level. He soon realized that Sculley was not a product person, but Sculley was always under the illusion that he and Steve were very similar people. Some internal coup politics, some of Jobs' bratty behavior, and some management decisions, like removing Jobs from the Macintosh division, led to a showdown between him and Sculley, ending with Steve being shown the door of the company he created.
After being ousted from Apple, Jobs created another company called NeXT, which showed a lot of promise courtesy of Steve's amazing marketing skills but didn't have much to showcase on the ground. Interestingly, Steve paid a hefty $100,000 to get the logo designed for NeXT. By now everyone was aware of his love for the intersection of creativity and technology, which led one of his friends to arrange a meeting with the Pixar owners. Earlier, he had wanted Apple to buy it but was denied by the management. Later he went on to invest his own money and got a 70% stake. Pixar made some path-breaking animated movies like the Toy Story trilogy, Cars, and others, in collaboration with Disney. When Toy Story 1 was released, Disney had the greater say in the partnership, and Jobs never liked playing second fiddle to anyone. So he took Pixar public and got the funds to drive a better bargain with Disney. Later, when Disney realized that they didn't have great creative people like Pixar had, they were left with no choice but to buy Pixar at a very heavy price, making Jobs the biggest shareholder of Disney. Jobs wanted to go back to Apple, but he wanted Apple to call him back. Apple was going through pretty bad times and was only 90 days away from bankruptcy. He willed himself into Apple by making Apple buy NeXT. He became an advisor to the board and worked for two years on a salary of $1 per year. He dropped around 70% of the products Apple was producing and brought the lineup down to just 3-4 products. He also changed the narrative of Apple with the "Think Different" campaign. He managed to get the company out of troubled waters, but that was just the beginning. There's a notion that Steve saw himself as a rebel and thought the rules didn't apply to him. He used to park his car in handicapped parking, or sometimes parked it on the boundary lines, taking up space for two cars. Just for fun, some Apple employees pasted boards at that spot which said "Park different". Apple came up with the iMac. The idea for the monitor design came when Steve was gazing at a sunflower in his garden with his design alter ego, Jony Ive. They improved the design in the next version of the iMac by adding a translucent Bondi-blue body to the monitor. They also added a handle to the monitor, so nicely blended into the body that it became quite pleasing to the eye. It was a really stylish monitor. In 2003, he was diagnosed with pancreatic cancer and was told to get his affairs in order, which was a subtle way of saying he didn't have much time left to live. He avoided surgery to remove the cancer for 9 months, treating it with his own fruitarian and herbal diets. Finally, he agreed to have the surgery 9 months later. He got healthy, but seeing death from such close proximity made him even more aggressive towards work. Next, they came up with the iPod, with the tagline "1,000 songs in your pocket". Jobs always insisted on simplicity in designs, and he would personally play with the wax models to get the feel of the device. He gave the team the task of coming up with an interface where the user could find whatever he wanted on the iPod with a maximum of 3 clicks, which led them to invent the beautiful track wheel to scroll through the list of songs. They came up with the iTunes store, where you could buy any song for just 99 cents. This helped the music industry regain some of the market it had lost to piracy. Steve was very articulate and great at negotiations, which helped him get many music labels onto the iTunes store.
However, some labels wanted royalties from iPod sales to get on iTunes, and others wanted to sell complete albums rather than fragmented songs. He never agreed to it, but slowly and steadily most of the music became available on iTunes. This in turn propelled the sales of the iPod even more. He became one of the most powerful people in the music industry at that point. Then, at an iconic product launch, Steve announced 3 products: an iPod, a mobile phone, and an internet communication device. Steve then repeated the list and said this was all bundled into one device, known as the iPhone. The iPhone also went through a lot of design iterations before they finally came up with one that had no keys. They also made the screen extend from end to end and even patented the design, which is why only Apple phones had bezel-less screens. It was also the first mobile to have Gorilla Glass. He was losing weight and was not in a healthy state when his cancer resurfaced in 2008. This time he got a liver transplant, but the situation was such that his family was called to the hospital, as it seemed he would not make it any further. He again defied death to see his son graduate, which was a long-time wish of his and kind of a deal with God to stay alive to see that day. He came back to launch the iPad, but he was so thin that a lot of rumors about his poor health surfaced, and it became really messy when a rumor of his death spread. He later resigned as Apple CEO. Before his official departure, Apple became the world's most valuable company, but the most fascinating characteristic was that he kept coming back with world-class products throughout his life, against all odds. He left behind a legacy on which Apple is still thriving, taking full responsibility for the end-to-end user experience.
https://medium.com/geekculture/steve-jobs-the-confluence-of-artistry-and-technology-5769413782bc
['Yash Mishra']
2021-08-27 20:32:59.266000+00:00
['Lifehacks', 'Apple', 'Steve Jobs', 'Inspiration', 'Technology']
948
Blue-Green Deployment with Azure
Today, one of the biggest challenges for software development teams is to deliver the code at high frequency without any downtime on production. As a result of their agile ways of working, product teams deploy releases to production more often. Month- or year-long release cycles are becoming rare these days. The business benefits of shorter deployment cycles are clear: time-to-market is reduced, customers get product value early, customer feedback also flows back into the product team faster, and eventually, the overall team morale goes up. In this podcast, we talk about one of the most popular deployment patterns — Blue-Green deployment. We touch upon our experiences of implementing Blue-Green on the Azure platform, its benefits, and how we achieved zero downtime deployments using some of the Azure platform capabilities. Speakers: Anutosh Yadav, Vice President, Technology Nitin Jain, Senior Specialist Platform, Engineering
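The podcast description above does not spell out the exact mechanism, but one common way to achieve a zero-downtime blue-green cutover on Azure App Service is to swap deployment slots. A minimal sketch follows, wrapping the Azure CLI from Python; the resource group and app names are placeholders, and this is an illustration rather than the approach the speakers necessarily used:

```python
# A minimal sketch of a blue-green cutover using Azure App Service deployment
# slots via the Azure CLI (resource names below are hypothetical placeholders).
import subprocess

RESOURCE_GROUP = "my-rg"        # hypothetical
APP_NAME = "my-web-app"         # hypothetical

def swap_slots(source: str = "staging", target: str = "production") -> None:
    """Promote the 'green' (staging) slot to production by swapping it with 'blue'."""
    subprocess.run(
        ["az", "webapp", "deployment", "slot", "swap",
         "--resource-group", RESOURCE_GROUP,
         "--name", APP_NAME,
         "--slot", source,
         "--target-slot", target],
        check=True,
    )

if __name__ == "__main__":
    # Deploy the new release to the staging slot first, warm it up, then swap.
    swap_slots()   # traffic shifts to the new version; swap again to roll back
```

The appeal of the slot swap is that rollback is just the same operation in reverse, which is what makes the pattern attractive for the short release cycles described above.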
https://medium.com/engineered-publicis-sapient/blue-green-deployment-with-azure-9f59bc15046a
['Publicis Sapient']
2020-06-09 04:46:14.918000+00:00
['Engineering', 'Deployment', 'Technology', 'Azure', 'Software Development']
949
Merged Mining in Jax.Network
Security in a sharded blockchain network In the past decade, many different approaches to solving the blockchain scalability problem have been proposed. Sharding is considered to be one of the most promising approaches. However, there is no common vision that establishes how sharding should be implemented, since every sharding design offers advantages and drawbacks. In our previous post we discussed the sharding design in JaxNet and its advantages. One major problem that should be addressed by any scaling solution is security against 51% attacks in shards. Indeed, the naive approach for scaling the network is to run an independent chain on every shard so that every mining rig can mine only one shard at a time. In this case, the network hash rate has to be split between shards. If there are N shards, then, on average, shard chains will have only an Nth part of the network's hash rate. So, if there are 100 shards, then a malicious actor who possesses only one percent of the network hash rate can take over some shards and conduct double-spend attacks. Such vulnerability is obviously unacceptable. Researchers consider different approaches to secure a sharded network. Some projects propose security solutions that rely on committees of trusted nodes. A consensus is reached by nodes through voting in these committees. However, such solutions often achieve security by reducing the decentralization of the network. Another popular approach is to select shard committees of random validators based on the number of coins they lock in stakes. According to this method, whenever a participant locks funds in a stake, a random shard is assigned to this stake. So participants don't know upfront in which shards they will become validators. Thus it's harder for them to collude for an attack. This is a standard aspect of protocols developed for projects such as Ethereum 2.0, Algorand, Cardano, Elrond, and others. Proponents of these solutions claim that randomness is the only thing that could prevent a malicious actor from accumulating hash rate in one shard. Also, they claim that the Proof of Work consensus used in Bitcoin doesn't suit their solution, and only Proof of Stake could provide the degree of randomness required for a committee selection resistant to a Sybil attack. Interestingly, these claims are not supported by rigorous argument. Researchers often appeal to common sense and intuition in line with our everyday experience. Some researchers stress the fact that there are no other known solutions. Others argue that "simply speaking it makes it true." However, this PoS approach is imperfect. As we highlighted in one of our previous posts, shard committees need to reorganize from time to time, which is required to prevent attacks from adaptive adversaries who are able to gain control of a shard committee by corrupting its members. This reorganization is very costly for weak nodes, since they have to download the data from other shards. This reorganization requirement greatly reduces or even eliminates the benefits of sharding. Some researchers claim that sharding based on PoS brings O(N) improvement in terms of scalability, where N is the number of shards. However, due to reorganization, this estimate is inaccurate and misleading. Other researchers admit the problem and design solutions with a compromise between performance and the frequency of committee reorganization. Although their estimates are less optimistic, they are inaccurate and misleading too.
The problem is that the concept of “stake” is often conflated with that of “validator.” Obviously, every participant can control many stakes. So the number of stakes is a very confusing parameter for any estimation. Another issue of PoS sharding is economics. Participants who control many stakes and run full nodes on all shards have a significant advantage over others who can’t afford the maintenance of all shards. Therefore, this flaw contributes to centralization in the network. The lesson that we have to learn from these proposals is the following: the design that secures shards from a shard take-over attack has to be aligned with proper economics that doesn’t penalize weak nodes in the network. The main achievement of the solution proposed in JaxNet is the balance that doesn’t cause consolidation of power in the network. Merged mining In contrast to mainstream sharding proposals, the JaxNet approach is based on a Proof of Work consensus. JaxNet uses the merge-mining technique to secure shards from shard take-over attacks. The term “merged mining,” or “merge-mining,” often refers to the process of mining two or more cryptocurrencies at the same time. People are familiar with this term thanks to altcoins such as Namecoin, which is merge-mined with Bitcoin, and Dogecoin, which is merge-mined with Litecoin. However, in JaxNet, this term acquires another meaning since merge-mining, in this case, is used for mining multiple shards within the same network. Observers claim that merged mining is not a good candidate for the creation of proper sharding systems. Indeed, if there is a restriction that every miner has only one shard or a small fraction of shards, then the security of the network dramatically deteriorates. In this case honest miners are unable to support a high hash rate across all shards. In the opposite scenario, where each miner mines nearly every shard in the network, every node executes nearly the same amount of work as a full node. In this edge case scenario, there is no benefit from sharding. Moreover, a single chain with large blocks can do the job better than this network. If we track the history of the usage of merged mining, we see that merged mining often went through one of these two edge scenarios. The additional income from merge-mining Namecoin or other small coins is often negligible compared to the basic income from Bitcoin mining. So miners can ignore this merged mining option. In contrast, the additional profit from the merged mining of Litecoin with Dogecoin is significant. So, in this case, miners often take advantage of the merge-mining option. As a result, Dogecoin gains nearly 91% of the hash-rate from Litecoin. However, in this case, the unified network of these two coins can do the job better than merged mining. Therefore, the creation of a proper sharding system based on a Proof of Work and merged mining is a rather tricky task. The sensible blockchain solution should be attractive to both big mining pools and small mining farms with limited resources. The main feature of the sharding implementation in JaxNet is the proper system of incentives that reward participants according to their effort in network maintenance. As a result, Jax.Network preserves the high level of decentralization, even on a global scale. The Solution implemented in JaxNet We have described the purpose of merged mining in JaxNet. For many people, it’s often a mystery how merged mining works and how it is possible to merge-mine multiple coins simultaneously. 
Let's discuss step by step how it is actually implemented in JaxNet. First, the miner chooses the subset of shards he wants to mine. Then he constructs block candidates for every shard chain. The structure of these block candidates is rather similar to the one used in Bitcoin: they have a block body with transactions and a small block header that includes important data for constructing the ledger. Then the miner puts these block headers into the leaves of a Merkle tree. It is a commonly used data structure that ensures the immutability of the data in its leaves. Nobody can modify these block headers without modifying the Merkle tree root. In JaxNet we call this instance of the Merkle tree the Shard Merkle Tree (MMT). Once the Shard Merkle Tree root is calculated, the miner puts it into a data container that we call the BC container. This container holds the nonce field. The task of the miner is to pick a value of the nonce so that the hash will be "good." Mining in JaxNet is somewhat similar to mining in Bitcoin, as it is a repetitive process of grinding through possible values of the nonce field. However, there is a difference from Bitcoin mining. Every shard in JaxNet has its own subset of "good hashes." A hash can be good enough to sign the block candidate for one shard, yet bad for signing the block candidate for another shard. Moreover, in JaxNet, the subsets of good hashes are disjoint. So a hash that is good for one shard is always bad for all other shards. Once the miner has found a good hash for a particular shard block candidate, he can broadcast it to the network along with the proof of his work. The miner's Proof of Work includes the header of the shard block candidate, the BC container, and the inclusion proof. This inclusion proof is the Merkle proof that the shard block header is associated with the target BC container. If a malicious actor attempts to counterfeit the shard block after the mining, his malicious actions will affect the validity of the MMT root that is included in the BC container. The immutability of the BC container is guaranteed by the Proof of Work in the form of its good hash. As a result, the miner is able to mine multiple shard chains simultaneously. It remains to determine how he is rewarded for his work. As we know from the first part of the paper, if the reward for mining is issued regardless of how the merge-mining was executed, there will be a negative impact on the blockchain's economics. Therefore, the block reward in JaxNet is always adjusted based on the number of shards that were mined by the miner who crafted the Proof of Work for the given block. The question is how other nodes learn how merge-mining was executed by a particular miner. The answer to this question is the Merged Mining Proof (MMP). The miner broadcasts it along with his block candidate. This data record could be treated as a sort of zero-knowledge proof, since it demonstrates evidence of the fact without disclosing the whole mining process. From a technical viewpoint, it's a compressed part of the MMT that we call the orange subtree. The immutability of this record is guaranteed by the MMT root. Thanks to the MMP, it becomes possible to reward miners proportionally to the actual Proof of Work that was executed and, therefore, to fix the blockchain economics. The block reward function in JaxNet will be discussed in the next post.
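As a rough illustration of the Shard Merkle Tree and the inclusion proof described above, here is a minimal Python sketch. The double-SHA-256 hash and the toy "shard headers" are assumptions chosen for illustration; they are not JaxNet's actual hash function or serialization format:

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double SHA-256, Bitcoin-style (assumption; JaxNet may use a different hash)."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root_and_proof(leaves, index):
    """Return the Merkle root of `leaves` and an inclusion proof for leaves[index]."""
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:                      # duplicate the last node on odd levels
            level.append(level[-1])
        sibling = index ^ 1                     # the node paired with ours at this level
        proof.append((level[sibling], sibling < index))   # (sibling hash, sibling-is-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify_proof(leaf, proof, root):
    """Recompute the root from a leaf and its proof; True if the leaf is in the tree."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Toy shard block headers (placeholders for real serialized headers).
headers = [f"shard-{i}-header".encode() for i in range(4)]
root, proof = merkle_root_and_proof(headers, 2)   # root goes into the BC container
assert verify_proof(headers[2], proof, root)      # inclusion proof for shard 2 checks out
```

A verifier only needs the shard block header, the small proof path, and the MMT root committed in the BC container to confirm that the header was part of the mined tree, which is what keeps per-shard proofs compact even when a miner merge-mines many shards.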
https://medium.com/jax-network/merged-mining-in-jax-network-d8f707708dac
[]
2020-09-29 11:11:47.107000+00:00
['Jaxnetwork', 'Blockchain Development', 'Jaxnetworktech', 'Blockchain Technology', 'Merged Mining']
950
The AirPod Connect Sound Is Beautiful
I spend a lot of my time in Google hangouts. This confers a number of benefits, connecting me with my boss and other members of the Medium team, which is largely based in San Francisco. I live in New York and have colleagues and direct reports in many U.S. cities. This technology brings us closer, but it also introduces friction. Shitty AV is a daily frustration, and the sound of people’s voices coming out of a laptop speaker is the actual soundtrack in hell. Wired earphones are a drag. Those gigantic wireless ones take up too much room in the handbag and they mess up your hair. You know where this is going. I am told I am a Xennial (’78) but I identify firmly as a Gen Xer, mainly because I like talking on the phone for hours at a time. I stopped talking on the phone several years ago. We all did, because of smartphones and texting and the other ways we waste our time. (Also, we dislike the humid, slippery sensation that comes from holding up a phone to our cheek for more than a few minutes at a go.) So I text, but I prefer talking. I have text-tone misunderstandings, we all do. I connect less with friends who suck at text, we all do. I try to strengthen my emoji game. You know where this is going. The first iPod came out a month after I moved to New York. I was a big music lover, working at a music magazine, and my boyfriend at the time got me the white one. I remember walking around New York alone, listening to music, feeling powerful and free, and like anything could happen. It was like I was sharing stolen moments with some other part of myself that emerged only in these conditions. The world felt gigantic. Over time, I would lose earbuds with more frequency than a deli umbrella, or the cord would become too tangled to reach my back pocket and so fuck it. Later, I’d hear there was a new Dreamville record and I would wonder how it was. I didn’t know who Post Malone was (maybe for the best). It took me forever to listen to Lemonade. You know where this is going.
https://onezero.medium.com/the-airpod-connect-sound-is-beautiful-8283f16cc4ab
["Siobhan O'Connor"]
2019-08-22 19:55:27.737000+00:00
['Self', 'Airpods', 'Digital Life', 'Technology', 'Culture']
951
Polkadot and Substrate explained (on the back of a napkin)
What is Polkadot? Why is blockchain interoperability crucial for mainstream adoption? What is Substrate, and why is it important? How can we make building blockchains as easy as deploying WordPress? These are the questions I've tried to answer on the back of a napkin. Enjoy!
https://medium.com/sandor-report/polkadot-and-substrate-explained-on-the-back-of-a-napkin-e5d027c7c33e
['Torsten', 'Quadrant Protocol']
2019-02-17 04:38:04.561000+00:00
['Blockchain Technology', 'Substrate', 'Polkadot', 'Ethereum', 'Polkadot Network']
952
Raised US$89M, TNG's Alex Kong Shares Tips on Building a Successful Team
Alex Kong is the founder and CEO of TNG FinTech Group, Inc – a Hong-Kong based company that provides next-generation financial services to 1.2 billion unbanked individuals throughout Asia. Bill is the Founder and CEO of StackTrek, a company specializing in using algorithms and data to build and scale programming teams for tech companies. Each week, Bill talks with top executives about startups, culture, and tech hiring. Bill: When did you found TNG? Alex: We incorporated the company in 2012, but we launched our services in November 2015. Bill: How many employees when you first launched your service, and how many you have now? Alex: When we first launched our service in 2015, we had less than 30 people, and today we have close to 400 employees across 14 countries. Bill: So your role in the company has changed from day one to now. How has it changed? Alex: I look at managing my company, it’s like we’re going through different phases of corporate life cycle, just like human beings. When we first started to launch our service, we are like a baby, so we are at the stage of baby. So, the way we manage a baby, a 20-people company, to 50 people, to 100, 200, 300, 400, are very different. Just like human beings, we go through different corporate life cycle. My role changes so fast because we grow so fast. We don’t even bother to print our position or title on the business card, because the role keep changing. It’s an on-going challenge. It’s also an on-going change management, because the way we manage the business and the people are very different compare to day one. From the freedom of working anytime, come to the office anytime, come to work at 11 a.m., go to lunch anytime to now in which we become very systematic. You better come on time, and go to lunch on time. A lot more professional and a lot more systematic. There are different phases of growth but to me, I’m excited about the unique DNA that we have created because of the rapid growth of the company, so we built a culture of obsoleting ourselves. Every week, our people, our different departments will discuss, sit down and discuss what happened, what we did last week, and what are the small changes we can do this week. And we work on their improvement week after week, and then the business keeps growing. Bill: You’ve mentioned company culture. Can you share with us what your company culture is like? Alex: It’s about survival. We have very little cash, and with that little cash, how do we survive? And we have to do anything to pay the rent, pay the salary. We didn’t talk about culture at all, we just work, work, work, day and night, and through that, we kind of built a culture of survival. Now, the company is profitable. We have a lot more resources that we can dedicate for benefits and rewards that help us cultivate certain culture. For example, recently we turned an entire office floor into a dedicated co-working-like space that promotes collaboration. It has an in-house cafe that serves coffees, sandwiches, soups, healthy drinks, etc. We now provide a lot more fringe benefits, and stop asking people to come on time. We don’t enforce the on-time policy. We don’t believe in that. But if they come on time, we reward them for something more. There’s a lot more freedom. Bill: So it’s a very rewarding culture. Alex: Yes. We want to make the culture rewarding because we believe people by nature want to work hard. 
We want to build a happy house — a happy environment that people look forward to come to, and collaborate with each other, and together create a solution and platform that serves as many as billions of the unbanked population around the world. Our mission is to bank the unbanked. Helping the unfortunate people who couldn’t open a bank account to gain access to banking or financial services. We need people to believe in our mission. We need people to understand that we are doing something great together. And while doing it, we are enjoying every moment of it. Bill: So as the company grows, you don’t see everybody as often as possible, especially everybody. So yesterday, I was talking to Tony Fernandes. He’s managing 21,000 people, but everybody has a direct line to him. They can text him directly. In your case, how can you ensure that as the company grows, they still feel like Alex is still part of the team? Since they don’t see your face every day. Alex: We are in a very virtual environment. I have 123,000 unread emails, and thousands of unread messages. People still send or copy me in messages, but I don’t really read every single one of them. Anything emergency, they will call me. And every time they come to me, it’s for a decision. So my job is, every moment in the office, one after another is making important decisions. And I delegate down to my second-liner and third-liner, to entrust them to make the relevant decisions and create a policy, and create a system with check and balance. So you have to entrust the people to carry out the job. They cannot depend everything on me anymore, only come to me for important decisions. Bill: Can you share any tips with leaders who want to build a successful team? Alex: There are a lot of things that you need to build a successful team, but one thing that has always stuck in my mind is: people that are part of the solution. I tell myself everyday: if I’m not part of the solution, then I’m part of the problem. When you first hear it, it sounds very harsh. I ask myself that question everyday, “Am I part of the solution?” If not, then I’ve became part of the problem. I share that to my people as well. When they come to me with a problem, I want them to think about solutions. You don’t come to me with just a problem. You have to come to me with proposed solutions. If you only come to me with your problem, then you yourself become part of the problem. Then what am I hiring you for? It’s harsh, but it’s a necessary evil. When we build teams, we need to make sure that we are building with people that can be part of the solution.
https://medium.com/stacktrek/raised-us-89m-tngs-alex-kong-shares-tips-of-building-success-team-cf9db9b643d3
[]
2020-11-26 02:13:44.679000+00:00
['Technology', 'Recruiting', 'Hiring', 'Startup', 'Tech']
953
DevOps in Salesforce: Considerations for your deployments
Welcome to our fourth post in a series about building an effective Salesforce deployment pipeline that your whole team will love. Our first post introduced DevOps in Salesforce and our second covered the planning aspects to be considered. In our previous post, we looked at the implementation of a pipeline. This post describes some considerations for Salesforce deployments. Written by Aaron Allport and Stuart Grieve Effectively deploying profiles and permissions between environments Profiles are "unique" in how changes are migrated between environments because Salesforce will typically take the delta of changes in the profile that pertain to the artifacts being deployed. This, therefore, means that it is incredibly hard to keep profiles in line across environments without the use of a dedicated tool to perform comparisons against profile extracts. Profiles have always been a challenge. I've spent more time than I'd like to admit with two monitors trying to compare profile markups to validate changes as part of releases. — Martin Gardner, Solution Principal It can quickly become infuriating when managing a profile in the repository (that is the source of truth) and seeing an entirely different result than expected after a deployment. Additionally, profiles in the repository need to be incredibly large as they need to contain explicit true/false values that pertain to the various settings and object/field security. This is necessary as these permissions change throughout the build of the configuration and code in any Salesforce project. Since the release of SFDX, Salesforce has advocated the use of Permission Sets, as these are immutable and explicit, yet more granular and manageable for a specific set of permissions. Whilst profiles won't go away, there is a new way of reliably managing user permissions across different environments. Only use profiles for what cannot be contained within a permission set Salesforce Profiles contain a lot of information. Some of this information, such as login hours and allowed IP range access, cannot be defined in a Permission Set and therefore needs to remain in a Profile. This Profile-specific information is the ONLY information that needs to reside in a profile. Given the nature of Permission Sets granting additional access, the default state for a given profile is no access to anything. It would be prudent to create this "no access" profile at the earliest stages of a project, from which additional profiles can be created for each business role. By combining Profiles and Permission Set Groups (explained below) that are in line with one another, automation can be applied to the assigning of permissions. With the immutable permission sets that are contained within Permission Set Groups and aligned to a Profile (and therefore job function), permissions can be reliably managed at the source in the repository. From the outset there should be an agreed approach to profiles vs permission sets. Given the challenges of deploying profiles, permission sets are definitely the way to go. But they need to be documented, managed and maintained. Think about your admin setting up a new user: how do they know what permission set/permission set groups to assign?
- Minesh Patel, Solution Principal Permission set groups are the new Profile With profiles now essentially "empty" of configuration save for the items that cannot be controlled by Permission Sets, Permission Set Groups now effectively represent the grouped permissions that would previously have been held in the profile. By creating Permission Sets (which by their nature are granular), these can be grouped together to form a Permission Set Group representing a business function. What this practically means is that the repository will contain an equal number of non-administrator profiles and Permission Set Groups, and a higher number of Permission Sets that represent the access required by users of Salesforce. These individual Permission Sets are re-used across different Permission Set Groups (such as a Permission Set to access and edit Accounts, Contacts, and Opportunities being re-used by the "Salesperson" and "Data Manager" Permission Set Groups). Automate permission set group assignment With a 1:1 ratio of Profile to Permission Set Group, it is possible to use automation to automatically assign a Permission Set Group to a user when they have the corresponding Profile assigned to them. The following describes the series of steps you'd implement in a User object trigger handler class (a rough sketch follows below): 1) determine if the change was to the ProfileId field; 2) get the name of the profile being replaced (Profile object, using the old Profile ID); 3) get the name of the profile being assigned (Profile object, using the new Profile ID); 4) unassign the permission set group matching the name of the profile being replaced (PermissionSetGroup and PermissionSetAssignment objects); 5) assign the permission set group matching the name of the profile being assigned (PermissionSetGroup and PermissionSetAssignment objects). The logic could become more or less complex depending on the needs of your organization, but it can be a powerful tool in ensuring permission consistency between environments.
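The post describes implementing this logic in an Apex trigger handler on the User object. As a hedged illustration of the same five steps, here is a rough Python sketch that performs the reassignment from outside Salesforce using the simple-salesforce library; the credentials and the 1:1 Profile-to-Permission-Set-Group naming convention are assumptions, and an in-org Apex trigger remains the approach the article actually recommends:

```python
# Rough sketch: mirror the five trigger-handler steps from an external script
# using simple-salesforce (credentials and naming convention are hypothetical).
from simple_salesforce import Salesforce

sf = Salesforce(username="admin@example.com",        # hypothetical credentials
                password="...", security_token="...")

def profile_name(profile_id: str) -> str:
    res = sf.query(f"SELECT Name FROM Profile WHERE Id = '{profile_id}'")
    return res["records"][0]["Name"]

def psg_id(label: str) -> str:
    # Assumes each Permission Set Group's MasterLabel matches the profile name.
    res = sf.query(f"SELECT Id FROM PermissionSetGroup WHERE MasterLabel = '{label}'")
    return res["records"][0]["Id"]

def sync_psg_for_user(user_id: str, old_profile_id: str, new_profile_id: str) -> None:
    # Steps 1-3: the ProfileId changed, so resolve the old and new profile names.
    old_name = profile_name(old_profile_id)
    new_name = profile_name(new_profile_id)

    # Step 4: unassign the group matching the old profile name.
    assignments = sf.query(
        "SELECT Id FROM PermissionSetAssignment "
        f"WHERE AssigneeId = '{user_id}' AND PermissionSetGroupId = '{psg_id(old_name)}'")
    for rec in assignments["records"]:
        sf.PermissionSetAssignment.delete(rec["Id"])

    # Step 5: assign the group matching the new profile name.
    sf.PermissionSetAssignment.create(
        {"AssigneeId": user_id, "PermissionSetGroupId": psg_id(new_name)})
```

The same lookup-unassign-assign shape carries over directly to an Apex handler; the main design choice is keeping the profile and permission set group names aligned so the mapping stays 1:1.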
https://medium.com/slalom-technology/devops-in-salesforce-considerations-for-your-deployments-c0eb22d3cca4
['Aaron Allport']
2020-09-24 16:01:50.398000+00:00
['Technology', 'CRM', 'Salesforce', 'Release Management', 'DevOps']
954
Introducing Gift Cards
Introducing Gift Cards Season’s greetings, As we enter the holiday season, “we’re in for a hard winter” has stuck with me. It’s always been a hard winter for unsheltered individuals. Following the impact COVID-19 has had on homelessness this year, several more Samaritan Members in Seattle, OKC, and Orange County will sleep tonight without a home. Now more than ever we feel a responsibility to make sure everyone has the means to get warm food, protection from the cold, safe places to sleep, and hope. If hope is something you have this season, will you share it through a Samaritan gift card so that your friends and family can join you in investing in a Samaritan Member’s journey off the streets? To Give a Gift Card 1. On the Samaritan website, select an amount and enter “GIFT CARD” in the Special Instructions textbox. 👇 2. Enter payment method and click Donate. 3. Your digital gift card (like the one below 👇) and instructions will be emailed to you so your loved ones can invest in a Samaritan Member’s journey off the streets!
https://medium.com/samaritan-journal/introducing-gift-cards-b155a2474335
[]
2020-12-18 19:03:03.788000+00:00
['Nonprofit Technology', 'Giving', 'Social Entrepreneurship', 'Homeless', 'Holidays']
955
10 Startup Industry Trends to Watch in 2021
It's pretty obvious that 2020 for many companies has been one they can't wait to put in the rearview. Not only does it give more meaning to the expression "hindsight is 2020", but it also has been a year of painful but necessary transformation. However, for other industries this year has been an innovation opportunity. So what are the startup industry trends to watch next year? While there have been some clear winners and losers in 2020, the coming year and an emerging vaccine will mean additional shifts and some continuing trends. Here are the top 10 startup industry trends to watch in 2021. Biotech Biotech and wearables have become more prevalent than ever. Providing information to your doctor through an app, doing a sleep study with your Apple Watch, and even monitoring your blood sugar with a bracelet or your health with a ring has become commonplace. But there is more development to be done in both hardware and software, and biotech is up to the challenge. Take, for example, DNA testing. While its family-tree research uses have started to slide, other trends are emerging to determine what you do next. For example, DNA Nudge offers a cheek swab test, an app, and a wristband. They work together to offer you shopping and nutrition suggestions based on your DNA. There's more to DNA than selecting the right dinner menu. Artificial intelligence and DNA testing may result in the best recommendations for exercise routines, skincare products, and other health recommendations. Sustainable Finance As big automakers scramble to catch up with startups like Tesla, investors are showing how much they care about the environment through where they invest their money. Investing sustainably is an emerging trend, and companies that create sustainable funds for trading and direct investors away from carbon-wasters and legacy oil concerns are taking advantage of it. Think of it this way: Exxon, one of the most valuable companies in the world at one time, has been targeted by an activist investor campaign to change directors. Meanwhile, Unilever has become the first company to give shareholders voting power on carbon reduction efforts. "Green" investing is a great place for startups to be in 2021. Not-Meat Speaking of sustainability, the not-meat trend toward vegan meat substitutes is no longer driven just by health-conscious, plant-only dieters. Instead, it has become a mainstream topic of reducing emissions from one of the most carbon-emitting industries on the planet, the production of red meat. Brands like Impossible Foods and Beyond Meat are just two examples. More meat alternatives (can we hope ones that are equally as tasty?) are on the way. The African Market Due to political and other concerns, Africa was often considered too risky for most VC investors, but all of that is changing. If your startup has the solution for an international problem that starts in Africa, listen up. According to Partech, $2 billion in VC funding headed to the large continent last year. Twiga is just one example, a startup helping to create a farm-to-market distribution center to reduce or even eliminate harvest waste in a place where food insecurity is ridiculously high. And it's not just about agriculture and food. Fintech startup Jumia, an AI-powered eCommerce platform, is just one of many other emerging companies. Geographic Expansion Speaking of geography, new startups are emerging in Spain, across Europe, and in other markets, building on the proven models of companies tested in the United States.
For instance, while we have Uber Eats, DoorDash, and Grubhub, Spain, North Africa, and even South Africa have Glovo. This attracts VC money because new companies are taking proven concepts into a new market ripe for innovation. No-Code App Builders First, you needed to have a website. Then a social media presence. Then both, plus a robust email list. But now, if possible, you need to have an app. App development has notoriously been expensive, but recently startups and companies like Zapier have made it possible for nearly anyone to develop an app without writing a single line of code. In other words, they have developed apps designed to develop apps. This is by no means a saturated space, and the development of more complex builders that can use AI to intuitively create usable and updateable applications affordably will continue to trend in 2021. The demand for apps won't slow anytime soon, and apps that create apps free up developers for more complex (and profitable) tasks. The Simple Website for Everyone Wait, we already have these, right? Well, sort of, but Google is always moving the needle, and the new Google Core Web Vitals coming soon has WYSIWYG editors and site builders on edge. But new startups are emerging with real solutions. Swipe lets you create beautiful, simple, responsive landing pages without knowing a single code command. Other builders similar to Webflow are emerging as well, and offer things like hosting services, email, and more, all in one package. Many email list management programs and CRMs are taking similar approaches and offering an all-in-one that appeals to businesses small and large. The Sharing Economy Lest we think the sharing economy has somehow reached a peak, think again. Even Airbnb has recovered from 2020 nicely despite travel restrictions, but people are finding they have more to share than just their homes. For example, a startup called Cloud Kitchens offers innovative kitchen sharing for delivery-only restaurants, an emerging trend in 2020 that will likely continue into 2021. Ordering in has become much more popular than the night out, and these kitchens are a creative, sharing answer. From sharing cars and homes to sharing bikes, recreational equipment, and more, the sharing economy is just getting started. Delivery and Takeout Speaking of delivery and takeout, the age of the food truck has returned in whole new ways. Restaurants are popping up in those little drive-through coffee kiosks, nixing the dining room in favor of takeout options. Restaurants have adapted to delivery-only models as well. Getting food to go or delivered, as mentioned above, is a huge trend, and restaurants are finding ways to stay in business by changing menus to items that survive a 20-minute drive in the passenger seat of a Toyota well. This trend will fuel the new "date night at home" movement triggered by 2020 and will be a trend to watch in 2021. Telemedicine Medicine at a distance has always been possible, and even needed in rural areas, but 2020 pushed innovation in the space. Part of the reason is that insurance companies were finally on board. This has made room for new startups to enter the now viable field. And companies like Lensabl are ready. The online eyewear company offers an eye exam in a box, you can choose frames online, and they can even find you frames to fit your old lenses or make new lenses to fit your current frame. Cologuard offers at-home testing for colon cancer, and there are other at-home tests as well.
It’s this type of innovation that will spur the next generation of telemedicine and testing at a distance. The revolution has just begun. Conclusion Many startups in emerging industries have been given new life in 2020 and will continue to thrive in 2021. Whether you are a startup or looking for investments, these will be trends to watch in the new year.
https://medium.com/@evolutionaccelerator/2010-startup-industry-trends-to-watch-in-2021-8584391d09c6
['Marsha Rogers']
2020-12-22 22:43:48.957000+00:00
['Trends', 'Small Business', 'Mentorship', 'Technology', 'Startup']
956
How To Build a Search Engine for OpenBazaar
by Chris Pacia, OpenBazaar Lead Backend Developer When we were designing OpenBazaar, now the world's largest decentralized marketplace, we knew the general approach to creating a search engine that delivered high-quality results was to use a centralized server to index listings and run search queries. A centralized search experience, however, was completely out of line with the freedom and flexibility this open-source marketplace represents. We knew there had to be some intermediate step we could take. A Federated Approach We decided to do what few other platforms have done and open it up so that anyone can create a search engine that is compatible with the OpenBazaar app. This would allow us to start by building our own search engine to help users navigate the network but would relieve their total reliance on it. All the data that needs to be indexed is public, and anyone can create and run a search engine of their own. Building a Search Engine for OpenBazaar I have written a brief overview of how to build a search engine for OpenBazaar and described some of the outstanding issues we are still working on. It does take some technical skill to build a search engine, but developers with previous experience building and maintaining online infrastructure should be able to create a simple version without too much difficulty. Step 1 — Finding Peers Step one is to build a database of all the peers you want to index. There are several ways you can do this: 1) Actively crawl the network The openbazaar-go API provides a couple of endpoints to make this easy. /ob/peers returns a list of peers you're connected to, and /ob/closestpeers/<peerID> returns a list of "closer" peers from the DHT. You can crawl the network for more nodes just by iterating over your current peer database and fetching closer peers. You can also use the /ob/peerinfo/<peerID> endpoint to look at their IP address if you wanted to index only Tor stores, for example. 2) Passively listen for new peers For this you'd likely want several nodes running at the same time. From there you could stream peerIDs to your database every time one of your nodes receives a new connection. You might not get all peers this way, but you should find most. openbazaar-go nodes do not expose an API which streams new peer connections, so you'd have to patch a node to get this data. Creating such an API should be relatively easy and it's something we are considering doing. 3) Subscribe to pubsub We are just now starting to make use of libp2p's pubsub capabilities. Currently only IPNS data is broadcast to pubsub channels, but hopefully soon we will have all nodes announce new publishes over a pubsub channel. Normal users will not be subscribed to this data (as it is a lot of data), but for people who are building search engines this would be perfect. Depending on what data we include in the publish, you should be able to subscribe to the appropriate channel and receive streaming updates from every peer. We'll be sure to announce and document this functionality when it is available. 4) Manual curation You could also just manually enter the peerIDs of stores you find interesting into your database. This isn't actually a bad strategy if you want your search engine to target specific types of products. Complex machine learning algorithms would likely also work for this if you are particularly adventurous. Step 2 — Indexing Stores Now that you have a list of stores it's time to index them. /ob/listings/<peerID> will give you a list of all the store's listings.
And if you grab the hash from a listing you can get the full details by using /ob/listing/<peerID>/<hash> You will also need the user’s profile which you can get from /ob/profile/<peerID> Then you can put the listings in whatever database you please to index and search them. Your API just needs to conform to the specification in OBIP2 and it will be compatible with the client. To see your search engine when your stores are indexed, enter the URL here in the OpenBazaar client: Step 3 — Keeping Content Fresh The biggest challenge with running a search engine is avoiding serving stale content. Given that OpenBazaar is still fairly new, users may come and go frequently. You want to avoid serving listings for stores where the owner has not been online in a long time to make sure to give potential buyers a fresh discovery experience. Further, listings are not guaranteed to persist on the network longer than a week after a vendor has gone offline. It still may be visible to some users due to caching, but not everyone may be able to see the listing. If you serve old listings, a potential buyer might click on it only to get a Not Found error. We recommend not serving listings from vendors which have not been identified as being online at least once within the past week. How do you determine that? Admittedly this is an area where we are still working to improve but there are a few ways. First, make sure your database of peerIDs also tracks the last seen or last good timestamp for the peer. Next, you have several ways you could update the timestamp: 1) Ping each peer The /ob/status/<peerID> endpoint will attempt to ping them and tell you if they are online. The problem, however, is that about 60% of nodes on the network are unreachable due to NAT traversal issues. Tor nodes tend to have even worse issues as many users run on operating systems like Tails and Whonix which lock down the control port and make it difficult to accept incoming connections. So this only gets you part of the way there. 2) Passively listen for incoming connections This is similar to 2) above in Finding Peers. You could update the timestamp for a peer every time they connect to you but again this will either require patching your node or for us to provide an API to get the data. This also will not cover every peer you’re interested in unless you run many nodes. 3) Disable caching If you set "UsePersistentCache": false in your node’s config file it will stop returning from cache and will do a new DHT crawl each time you request user data. If the data has dropped out of the network, you will know about it as the data will return not found. If it’s not found, you should generally not serve up that peer’s data. Additionally, you’ll likely also want to set "BackUpAPI": "" in the config as by default it will try to get the record from our gateway’s cache if the record cannot be found in the DHT. 4) Use IPNS record validity Every time a user publishes new content they publish an IPNS record containing an expiration date in the DHT. That expiration date is the time at which the user’s listings will drop out of the network absent them re-publishing which happens automatically when their node starts up and once a day thereafter. When you query for a user’s listings or profile or anything else, your node looks up the IPNS record. You could theoretically use this expiration date as a proxy for the last good timestamp for the peer and beyond that stop serving their listings. 
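Tying Steps 2 and 3 together, the sketch below shows, under the same assumptions as before (local node at http://localhost:4002, class and storage methods invented for illustration), how you might fetch a store's profile and listings and record a last-seen timestamp via /ob/status. The exact shape of each JSON response is not documented here, so the "online" check is only a placeholder you would adapt to your node's actual output.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Instant;

public class StoreIndexer {

    // Assumed base URL of a local openbazaar-go node's API; adjust to your setup.
    private static final String API = "http://localhost:4002";
    private final HttpClient client = HttpClient.newHttpClient();

    private String get(String path) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(API + path)).GET().build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    // Step 2: pull everything we need to index a single store.
    public void indexStore(String peerId) throws Exception {
        String profileJson = get("/ob/profile/" + peerId);   // the vendor's profile
        String listingsJson = get("/ob/listings/" + peerId); // summaries of all listings
        // For the full details of each listing you would additionally call
        // /ob/listing/<peerID>/<hash> with the hash taken from each summary.
        saveToDatabase(peerId, profileJson, listingsJson);
    }

    // Step 3: track freshness by pinging the peer and recording a last-seen time.
    // Note that many peers are unreachable behind NATs, so a failed ping alone
    // should not be treated as proof that the store is gone.
    public void refreshLastSeen(String peerId) throws Exception {
        String status = get("/ob/status/" + peerId);
        if (status.contains("online")) { // placeholder check; inspect your node's real response format
            saveLastSeen(peerId, Instant.now());
        }
    }

    // Placeholders for whatever storage/search backend you choose (for example a
    // SQL database plus a full-text index); these are not part of the OpenBazaar API.
    private void saveToDatabase(String peerId, String profile, String listings) { /* ... */ }
    private void saveLastSeen(String peerId, Instant when) { /* ... */ }
}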
This data is not yet exposed in our API and it’s something that we have on our list to develop. 5) Use pubsub Currently, whenever you query for a peer’s data, it subscribes to streaming updates for that peer. This means you’ll be notified immediately whenever they publish a new IPNS record. Not only does this tell you when they have signed online, but you could also get the latest IPNS record validity. Currently, disabling cache–3) above–also disables pubsub and would make this method ineffective. This is another issue that we are working on improving. Step 4 — Making Money! It’s entirely possible for you to monetize your work on an OpenBazaar search engine by adding paid ads in your search results. In fact, you could pretty easy sell ad space for your search engine through your own OpenBazaar store! Issues and Improvements This is a general overview of the technical work required for building an OpenBazaar search engine. We’re aware of some areas we would like to improve upon such as creating an API that allows us to stream newly found peerIDs and also have APIs that allow us to inspect IPNS records or pubsub publishes to update our timestamp associated with each peer. We could likely combine all of this into one streaming API that pulls data from all these sources and streams not only new peerIDs that it finds, but also the most up-to-date timestamp for that peer that it is able to decipher. This way search engines only need a single API to update their peers database. We are constantly working to make it easier for anyone to jump right in and start building on the OpenBazaar platform. If you have any questions or comments about any of this, please join our conversation in Slack and tell us what you think.
https://medium.com/openbazaarproject/how-to-build-a-search-engine-for-openbazaar-5f190c47a284
[]
2018-09-07 16:28:54.916000+00:00
['Ecommerce Software', 'Bitcoin', 'Blockchain Technology', 'Software Development']
957
The Power of Game Design Analysis
The Power of Game Design Analysis I have been running my own gaming site, Game-Wisdom, for more than eight years now. Over that time, I’ve tried to raise consciousness around what it means to study game design beyond reviews. That means looking more deeply at whether or not a game’s design actually works in more objective terms. Some consider that very concept impossible. Whether or not a game does well or poorly in terms of sales (and even reviews) isn’t necessarily a reflection on its actual underlying design in the way many might think. In this piece, I want to delve further into this question of what it means to study and critique game design — and why developers should welcome this kind of analysis. Measuring design Several times in the past I’ve discussed what it means to review a game’s design and how that differs from a traditional “game review”. Here are the key questions a design analysis focuses on: What is the developer’s intent? Are there any elements that get in the way of it? Does the game keep the player invested until the end? The goal is to examine the gameplay to see if what’s there is enough to keep the player invested until the very end. Recently I touched on the idea of using achievements to track player progression in a game, and this is an extension of that idea. A game reviewer may be a critic who approaches games from the point of view of the average player — this makes sense, and it’s why players read and watch game reviews. In this case, players are asking questions like “should I buy this game?” rather than “is this game well-designed to achieve its goals?” Animation by ranganath krishnamani on Dribbble. But analyzing a game’s design is somewhat different from writing a typical review. In particular, it requires a specific understanding of user interface and user experience design (UI/UX), game design theories and practices, and the broader video game landscape. When thinking about game design as a framework, it’s important to bear in mind that it involves both art and science. Reviews — as distinct from analysis — will tend to focus on the art rather than the science. This isn’t a criticism of game reviews. Rather, the point is that game developers themselves aren’t getting the full story if they only consider reviews when thinking about the reaction to their game. When we analyze the design of a game, we aren’t interested in how many copies were sold, how many accolades were earned, or if fans and critics are calling it “the best game of all time.” Making a successful video game isn’t like wielding magic, nor is it a question of firing blindly in the dark and hoping to hit upon a pot of gold. When we cast a wide net and consider all of the games being released each month — both “good” and “bad” titles — we can start to detect patterns of success or failure. These patterns are hinting at a deeper truth that emerges when game design itself is analyzed through a more formal lens. The role of analysis Your video game is not special. It doesn’t matter how long you’ve spent building it, or if the idea came to you in a dream, or even if it’s the number one game being played on Twitch. There is no question, of course, that a creator has a special attachment to their project — especially in a field that overlaps so strongly with art and personal expression. But if you want to assess your game’s design — its success or failure on its own terms — then it’s necessary to be able to take an objective approach. 
This might sound obvious, but in practice, very few people are able to scrutinize their own work in this way. There are tried and true methods for injecting objectivity into the analysis process, of course. Play testing is perhaps the most important component here. No matter what you think about your game, you can only know by putting it in the hands of potential players and observing their unvarnished reactions. Companies like Nintendo have been following a principle like this since long before UX itself was a defined concept or practice (interestingly, this has also led to the company’s famous ability to “upend the tea table” — to completely discard a game project even if it’s well into development, if the evidence shows that it isn’t succeeding at meeting its design goals). Studying game design also matters. In my achievement analysis piece mentioned above, I pointed out that even critical darlings that might get nothing but praise can also suffer from horrible playtimes and completion ratings when you dig into their stats. The thing is, game design issues aren’t hard to spot if you know what you’re looking for. When I say that I can examine the quality of a game within 30 minutes, that’s not me being concieted; that’s just how long it takes to spot these problems. Issues that drive people away from a game aren’t usually hidden under hours of otherwise-stellar gameplay. Rather, they tend to show up within the first few minutes of play. This is precisely why we often see such a huge drop-off of players within the first hour or so of a game. If an issue is annoying 10 minutes in, it’ll remain that way throughout the whole experience. If I haven’t yet convinced you about the difference between game reviews and game design analysis, let me dig in a little further. Game design analysis goals The value of game design analysis may seem self-evident to many readers. But it has occasionally faced significant pushback, particularly thanks to fandom culture. People who attach their own worth to the products they consume can sometimes view legitimate criticism as a personal attack. Many of my design pieces on critically successful games have been attacked by fans. The most common complaints tend to be something like “You must not be good at the game to complain about it,” or “You’re just hating on something popular.” There are many indie games with very fervent fanbases who see any judgment on a title as constituting an attack or insult toward it. A question around this came up on my Discord server about these kinds of games: does criticism even matter to the creators? If the fanbase is so hardened and so supportive of a game, is it even worth pointing out issues or flaws? I would argue, of course, yes. No matter what game we’re talking about, there’s always an overall limit on the number of people available to buy and support it. Every issue or problem, however unintentional on the part of the developer, reduces the number of potential buyers. It’s sometimes the case that games with the most fanatical fans often struggle to gain traction in the wider market, resulting in low sales. In today’s market, there’s an abundance of choice for players. Nobody is truly stuck with a game they don’t like — the people who become annoyed by your game can refund it within two hours of purchase, and you’ll never see them buy a game from you again. Let that sink in for a moment. 
If you can make simple improvements to a game that can double or even triple the number of eyeballs it’ll attract — or the actual number of people willing to play it — that could make an enormous different, especially for an indie studio. One goal of game design analysis is to detect as many pain points as possible so that they can be prioritized and removed, ensuring that the strengths of the core gameplay loop are on full display. Again, nothing about this process is magical; it’s simply about understanding what attracts and retains players. So many developers do not understand — or choose to ignore — the fundamentals when it comes to designing a game, and they can often be confused about why their game isn’t selling. Releasing a video game is always risky. It’s up to developers to do what they can to mitigate those risks. Be it through strong design, play testing out problems, and marketing the game correctly — these risks can be managed. Show me any game where the developer isn’t sure why people aren’t buying it, and you’ll find the mystery can be solved within 30 minutes. Criticism and analysis also matter from a continuous improvement perspective. Understanding why a game succeeds or doesn’t is essential for growing as a designer. The idea is to learn from mistakes, so that they aren’t continually repeated in game after game. Game design is a craft, and like any craft, growth and improvement are important. Being able to understand some of the fundamentals of game design — and the ability to recognize and apply a framework — also means that developers don’t need to always re-invent the wheel. They can leverage proven techniques and ideas that have a solid track record in the market. Image by ranganath krishnamani on Dribbble. Future of game analysis In my view, game design analysis grows more important each year. The days where you could hope for a small number of people to find your game organically are gone. If you want your game to do well at all — let alone truly succeed — you’ll need to understand whether or not your design works. It’s the reason why I look at the full library from a studio — not just their biggest hit — when considering how successful they are. We often see studios that have one really strong game still go out of business. Often, it’s due to one of two things: Their first game was a critical darling, but it drove so many people away due to various problems that there was no fanbase for their other titles (or not enough fans to sustain the studio). They had a successful game but didn’t understand why it was successful, which led them to chasing other designs and burning out (or going through their finances) in the process. Not every game designer is going to become the next Sid Meier or Hideo Kojima. Your primary goal as a game developer is to make games that allow you to keep making games. Remember, it doesn’t matter if you create the “greatest game of all time” if nobody actually plays it and you go out of business afterwards. If you’re a game developer and you are interested in assistance with the kind of analysis I’m describing above, feel free to reach out to me. If you enjoyed my article, consider joining the Game-Wisdom Discord server. It’s open to everyone.
https://medium.com/super-jump/the-power-of-game-design-analysis-a25bf231aec1
['Josh Bycer']
2020-12-04 07:11:35.447000+00:00
['Gaming', 'UX', 'Game Design', 'Videogames', 'Technology']
958
Long Live Consumer VC
Many following venture capital trends, and just as importantly the mood and psyche of venture investors, will tell you this: “consumer investing is dead”. In fact, many early-stage investors have spent the last few months (well before the Covid-19 pandemic) questioning venture returns in the consumer space and therefore developing alternative theses and sources of deal-flow outside of it. In some ways, the trend is perhaps understandable: Throughout 2019 and into 2020, a number of high profile consumer IPOs have disappointed (think Casper, Uber, Lyft, and SmileDirectClub). In addition, the industry is now filled with rumors of under-performance by VC-backed consumer startups, most notably amongst DTC brands with extremely lofty valuations. The argument here is that the true cost of scale was business model unsustainability. Many investors seem to have given up on the largest consumer categories. For example, the prevailing view (thankfully, prevailing doesn’t mean unanimous) amongst venture investors and technology leaders is that consumer social has become un-investible. The thinking was simple: the network effects acquired by the largest players in social (Facebook and its affiliates, and to some extent Pinterest, Twitter, Linkedin and Snap) are too powerful to break through. A similar reasoning applies to a number of the largest consumer categories, including dating (“Tinder has won”), travel (“Airbnb rules the world”) and others. Now, as the world grapples with the first-, second-, and third-order effects of the Covid-19 pandemic, the binary outcomes of consumer investing seem like sub-optimal capital allocation strategies. “Better focus on more stable and high-margin B2B plays”, is what I hear. I don’t share this pessimism. To the contrary: my conviction is that consumer investing has never been so appealing in the last 5 years. There are many reasons underpinning this conviction; far too many in fact for a single post. In this post, I want to focus on just one of them: standing in 2020, we are at the cusp of a very rapid acceleration of shifts in spending patterns as a new generation of consumers with wildly different expectations enters the mainstream. Generational Change Means Potential 🦄 The facts are well known. People born between 1995 and 2010, which are typically considered “Gen-Z” (in so doing perhaps hiding significant differences between older/younger members of that “cohort”), will become in the next decade the largest consumer cohort in the US. That isn’t to say that they aren’t already consuming in vast amounts — either directly or through their parents — and presiding over the destiny of exceptional startups. However, that process is accelerating rapidly. It’s easy to underestimate the spending shifts that this will entail. For one thing, we tend to judge the prevalence of social and cultural phenomena at the yardstick of our own experience (“if I don’t like it, why would anybody else?”). Yet, spend half a day with a 10–15 year old and you’ll notice that there is something radically different about the way that generation lives through life compared to the way “we” did. They expect different things, engage with friends, family, strangers, and brands differently, and therefore won’t readily be natural consumers of products and brands that we take for granted. 
To put it differently, given the codes, habits, and hierarchies of values of that generation, many of today’s key players are losing cultural relevance much faster than they think– and with that brand equity, users, and profits will be progressively eaten away. Early 🏆 I’m not the only one to have identified this phenomenon. Far from it. Several consumer application builders have emerged to respond and, in many ways, fuel these trends. Take consumer social, for example. We are seeing a resurgence of new and fascinating apps in the consumer social space that have already acquired scale or cultural significance. A couple of examples come to mind. TikTok’s emergence as a worldwide cultural phenomenon in the last 36 months is clearly one of the most meaningful ones. It’s not the only example though. With a key utility than is intriguing to many (let your friends see where you are at all times — the bridge to meet your friends IRL), Zenly is now one of the most successful consumer social apps in the world, with tens of millions of users and was as of February 2020 the fastest growing social app in several markets. It is now 20–30 times larger than at the time of Snap’s acquisition in mid-2017. Where are the Next Consumer Ecosystems? 🌍 Interestingly, many of the inroads into consumer apps are happening where you wouldn’t expect them. Ecosystems like Paris and Berlin are proving to be exceptionally fertile grounds where very promising consumer social startups like Yubo, Yolo, and Jodel (amongst others) have emerged. Outside of Europe, pay close attention to “challenger” consumer ecosystems like New York City. At Secocha, we are extremely proud to be backing some of the most exciting consumer applications born in NYC, including Brigit, Bunches, Stacks, Wardrobe, and Jour. All that to say that if you’re working on a consumer startup (targeting Gen-Zs or not) or a believer in consumer investing, I’d love to chat. Stay tuned for more posts on this topic. PS — it’s my first blog post so any feedback welcome! 🙏
https://medium.com/secocha-ventures/long-live-consumer-vc-c86ad55aa915
[]
2020-06-22 19:26:17.978000+00:00
['Technology', 'Business', 'Venture Capital', 'Tech', 'Consumer']
959
Ten lessons from twelve years of AWS
#10 — Learn from others. Learning from others is the single most important thing I have learned. And I have to admit, sometimes I still have to remind myself to shut up and listen. There is always someone in the room that knows more than you do. That person is just not necessarily broadcasting it. Be open and ready to be challenged and change opinion. Share your ideas with others, challenge them and especially let others do so. Fortunately, there are myriads ways to do this — participating in this community is, of course, one of them. A few weeks ago, while preparing for this talk, I asked the developer advocate team to share with me their lessons — and few of them kindly answered. But as I didn’t really know what I was going to talk about a few weeks ago, I asked my question with a heavy bias towards getting technical answers — so despite my first lesson not to focus on technology, the following is mainly about technology — but I am the one to blame for that :) Javier Ramirez: Don’t use the AWS as a traditional Data Center. Have a consistent naming/tagging strategy early on. Especially for everything that has unique names (s3 buckets, for example). Alex Casalboni: Learn IAM before you do anything serious. Master Infrastructure as Code (IaC), either CloudFormation or Terraform. Use the management console only to build prototypes, or the first time you try out a new service, switch to IaC for anything else. Boaz Ziniman: Account security — don’t use the root account. Always enable MFA. Set IAM users for every developer, with different roles. Tag everything! Enrique Duvos: Security, Security, Security. Dennis Traub: Turn on CloudTrail. Turn on Guard Duty. Don’t assume that development teams will consider security when building on AWS. So, I’m with Enrique here —think security. Marcia Villalba: Try to see if there is a managed service before building it yourself. Danilo Poccia: Adopt the right mindset: With on-prem virtualization and hosting, you have a finite set of resources where you try to squeeze as many things as possible. With the cloud, you have access to a virtually unlimited set of resources, and you should use the minimum you need at any point in time. Sebastien Stormacq: Use EC2 only if you have exhausted all other possibilities. This is not because of EC2, but because no machine to manage is better than one machine to manage. So, go serverless as much as possible. And serverless is not only Lambda, but it is also RDS, Cognito, S3, API Gateway, etc. Ricardo Sueiras: Shift the conversation around AWS from technology to business outcomes (Agility, etc.). It will help you get the exec sponsor/support required for success. From a technical point of view, bet on automation. Resist being obsessed with the console. Isabel Huerga Ayza: Governance — everything that is not clearly defined will be done by no one. Or best case it will be, but not consistently. Goes to accountability and ownership, which is not good to leave to good faith when what is at risk is your business. Don’t wait until a production disruption to enable support. Setup budget alerts! Steven Bryen: My advice is to think differently about things like the dynamic allocation of resources. An example would be security groups. You can create a rule referencing another security group, which means new instances automatically match the rule and have access as they scale. It is very different from the traditional on-prem mindset but is so valuable. Cobus Bernard: Read the pricing page for a service and set up billing alerts.
https://medium.com/the-cloud-architect/ten-lessons-from-twelve-years-of-aws-1ba92fe5ff88
['Adrian Hornsby']
2020-07-08 07:51:25.493000+00:00
['Technology', 'Community', 'AWS', 'Lessons Learned', 'Keynote']
960
How new Mac keeps itself cool without fan?
The new Apple Mac has very efficient heat management. It is reported to manage heat better than a PC with a water-cooling system. Explanation The fan in a typical laptop is connected to the CPU through a copper heat pipe placed over the CPU. This heat pipe draws only a small amount of heat from the CPU, so the fan can only lower the temperature to a small extent. Heat sink in a normal laptop In the new Mac, however, an aluminum heat spreader is used, which is far more efficient because it is connected directly to the CPU, and aluminum draws heat away very effectively. Mac heat sinks But why aluminum? 1. Thermal conductivity The metals with the highest thermal conductivity are silver and copper, but they are far more expensive, so aluminum is the best choice. In relation to its weight, it is almost twice as good a conductor as copper. See the graph Thermal Conductivity of different metals 2. Consistency Thermal conductivity of aluminum at different temperatures 3. Non-magnetic 4. Sound and shock absorption 5. Non-sparking 6. Recyclability
https://medium.com/@anmolmalikop/how-new-mac-keeps-itself-cool-without-fan-3a3a36676dc4
['Anmol Malik']
2020-12-06 15:04:00.082000+00:00
['Apple', 'Computer Science', 'Mac', 'Technology', 'Knowledge']
961
A Recap: Gitcoin at ETHDenver
A Recap: Gitcoin at ETHDenver The Burner Wallet, Kudos, and Grant Matching, oh my! Gitcoin was built in Colorado. It’s a big part of our culture and history, and we showed up ETHDenver in a big way to celebrate! 2019 has a lot of fantastic events lined up — and they all have to live up to high expectations set this weekend at ETHDenver. Set in the Sports Castle, 1,500 participants (including 750 hackers) shuttled into 6 floors including three hacking floors, a chill room, a Maker Space, and lots of interesting conversations about Ethereum and the future of the internet. We won’t be able to cover everything in this article, but did want to cover the fun experiments we had at the event! The BuffiDai Wallet: Buying Food With Crypto Austin Griffith, Director of Research at Gitcoin, unveiled the Burner Wallet earlier this year as a quick and easy onboarding to crypto payments, built using MakerDAO’s DAI and POA Network’s sidechain, xDai. At ETHDenver, the Burner Wallet rebranded as the BuffiDai Wallet and was used across ETHDenver to buy meals at the several great food trucks at the venue. Here’s a view of the wallet in use, from the lens of the wallet itself. 🤯 As it was used by each person at the event, it naturally was something folks thought about for hacks and happy hours. A particularly interesting project created a Burner Wallet for private transactions — while the MakerDAO Dappy Hour used the wallet to pay for beer at the event, while a monitor updated with the latest and greatest stats on purchases for the night. A highlight of the weekend and, arguably, the year, for crypto is the Burner Wallet. UX improvements are not coming — they are here. The ETHDenver Kudos Game!
https://medium.com/gitcoin/a-recap-gitcoin-at-ethdenver-1e48bfc93805
['Vivek Singh']
2019-02-22 23:09:40.409000+00:00
['Tech', 'Blockchain', 'Ethereum', 'Technology', 'Open Source']
962
Is Discord a Good Solution for Group Video Conferencing?
Is Discord a Good Solution for Group Video Conferencing? Move on over Zoom, or…? Photo by Chris Montgomery on Unsplash Discovering new technologies During the beginning of the 2020 global pandemic, I was recovering from a personal mental health crisis. In March of 2020, many things saved me; one of them was writing. I also gained immense support from a virtual writing group that started organically through Instagram and Zoom. It was one woman’s idea. And, it took off. We met weekly for several months. We were both a writing group and a moral support group. Writers came and went, while a core group of writers stayed on. In July there were some new arrivals. One of them touted the benefits of using Discord instead of Instagram and Zoom for our meetings. Honestly, I wasn’t excited to add another platform to my daily rotation. Some writers were eager. They saw the benefit of merging Zoom and Instagram into an all-purpose application. Others were hesitant. Ultimately, we decided to give Discord a try. Discord isn’t designed for writing groups to video chat. That’s the thing about Innovation — it takes people who are willing to think outside of obvious uses and limitations. For the techies who understand (not me) Developers describe Discord as “All-in-one voice and text chat for gamers that’s free, secure, and works on both your desktop and phone”. Discord is a modern free voice & text chat app for groups of gamers. Our resilient Erlang backend running on the cloud has built in DDoS protection with automatic server failover. On the other hand, Zoom is detailed as “Video Conferencing, Web Conferencing, Webinars, Screen Sharing”. Zoom unifies cloud video conferencing, simple online meetings, and cross platform group chat into one easy-to-use platform. Our solution offers the best video, audio, and screen-sharing experience across Zoom Rooms, Windows, Mac, iOS, Android, and H.323/SIP room systems. -StackShare Photo by Zain Saleem on Unsplash Using Discord We have been utilizing Discord for our writing group for about a month now. As with all platforms, there are pros and cons. Pros: •Free •Screen sharing capability •Ability to separate threads of chats by subject. •Ability to video or voice chat easily at any time within the app •Ability to easily private message other group members Cons: •Running group videos seem to bulk down our computers and put them into overdrive. There are problems with freezing screens and hearing others talk/ being heard. •Adding another platform Conclusion For our writing group purposes, Discord makes sense for impromptu voice chats, private messaging, and sharing in group threads. It does not yet make sense for group video chats. For now, we will stick with Zoom for those as the quality of the calls is much better. Unfortunately, free Zoom video chats are limited in duration, so we must either pay the professional rate or make several links to keep our 2–3 hour meetings running as seamlessly as possible. And, we are now utilizing 3 platforms instead of 2, although Instagram looks to be the least used of the 3 since the switch to Discord for messaging. Here’s hoping Discord’s video hosting capability is improved soon for those of us who wish to use it as a video conferencing application instead of a gaming one. That will be truly innovative!
https://medium.com/the-innovation/is-discord-a-good-solution-for-group-video-conferencing-e8b6a1d6fa02
['Aimée Gramblin']
2020-08-05 20:19:52.441000+00:00
['Technology', 'Writing', 'Innovation', 'Discord', 'Zoom']
963
Understanding Java Streams
After a deep introduction to functional programming in my last article, "A new Java functional style", I think it's now time to look at Java Streams more in depth and understand how they work internally. This is something very important when working with Streams, as it can affect performance. You'll be able to see how much easier and more efficient processing sequences of elements is with Java Streams compared to the "old way" of doing things, and how nice it is to write code using fluent interfaces. You can now say goodbye to error-prone code, full of boilerplate and clutter, that was making our lives as developers much more complicated. Let's start with a brief introduction to Java Streams first! Introduction Java Streams are basically a pipeline of aggregate operations that can be applied to process a sequence of elements. An aggregate operation is a higher-order function that receives a behaviour in the form of a function or lambda, and that behaviour is what gets applied to our sequence. For example, if we define the following stream: collection.stream() .map(element -> decorateElement(element)) .collect(toList()) In this case the behaviour that we're applying to each element is the one specified in our "decorateElement" method, which will supposedly create a new "enhanced" or "decorated" element based on the existing element. Java Streams are built around their main interface, the Stream interface, which was released in JDK 8. Let's go into a bit more detail briefly! Characteristics of a stream As mentioned in my last article, Java Streams have these main characteristics: Declarative paradigm Streams are written specifying what has to be done, but not how. Lazily evaluated This basically means that until we call a terminal operation our stream won't be doing anything; we will just have declared what our pipeline will be doing. It can be consumed only once Once we call a terminal operation, a new stream would have to be generated in order to apply the same series of aggregate operations again. Can be parallelised Java Streams are sequential by default, but they can be very easily parallelised. We should see Java Streams as a series of connected pipes, where in each pipe our data gets processed differently; this concept is very similar to UNIX pipes! Phases of a stream A Java Stream is composed of three main phases: Split Data is collected from a collection, a channel or a generator function, for example. In this step we convert a data source to a Stream in order to process our data; we usually call it the stream source. Apply Every operation in the pipeline is applied to each element in the sequence. Operations in this phase are called intermediate operations. Combine Completion with a terminal operation where the stream gets materialised.
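As a quick, self-contained illustration of the lazy-evaluation characteristic described above (the sample words and printed messages are my own, not from the original article), nothing below is executed until the terminal operation runs:
import java.util.List;
import java.util.stream.Stream;

public class LazyStreams {
    public static void main(String[] args) {
        List<String> words = List.of("foo", "bar", "baz");

        // Declaring the pipeline alone prints nothing: the map stage is only
        // recorded, not executed, because streams are lazily evaluated.
        Stream<String> pipeline = words.stream()
                .map(word -> {
                    System.out.println("mapping " + word);
                    return word.toUpperCase();
                });

        System.out.println("pipeline declared, nothing executed yet");

        // Only when we call a terminal operation (here, forEach) do the
        // intermediate operations actually run over the elements.
        pipeline.forEach(result -> System.out.println("result: " + result));
    }
}
Running this prints the "pipeline declared" message first; only the forEach call triggers the mapping.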
Please remember that when defining a stream we just declare the steps to follow in our pipeline of operations, they won’t get executed until we call our terminal operation. There are two interfaces in Java which are very important for the SPLIT and COMBINE phases; these interfaces are Spliterator and Collector. The Spliterator interface allows two behaviours that are quite important in the split phase: iterating and the potential splitting of elements. The first of these aspects is quite obvious, we’ll always want to iterate through our data source; what about splitting? Splitting will take a big importance when running parallel streams, as it’ll be the one responsible for splitting the stream to give an independent “piece of work” to each thread. Spliterator provides two methods for accessing elements: boolean tryAdvance(Consumer<? super T> action); void forEachRemaining(Consumer<? super T> action); And one method for splitting our stream source: Spliterator<T> trySplit(); Since JDK 8, a spliterator method has been included in every collection, so Java Streams use the Spliterator internally to iterate through the elements of a Stream. Java provides implementations of the Spliterator interface, but you can provide your own implementation of Spliterator if for whatever reason you need it. Java provides a set of collectors in Collectors class, but you could also do the same with Collector interface if you needed a custom Collector to combine your resulting elements in a different way! Let’s see now how a Stream pipeline works internally and why is this important. Stream internals Java Streams operations are stored internally using a LinkedList structure and in its internal storage structure, each stage gets assigned a bitmap that follows this structure: So basically we could imagine this representation as for example: Why is this so important? Because what this bitmap representation allows Java is to do stream optimisations. Each operation will clear, set or preserve different flags; this is quite important because this means that each stage knows what effects causes itself in these flags and this will be used to make the optimisations. For example, map will clear SORTED and DISTINCT bits because data may have changed; however it will always preserve SIZED flag, as the size of the stream will never be modified using map. Does that make sense? Let’s look at another example to clarify things further; for example, filter will clear SIZED flag because size of the stream may have changed, but it’ll always preserve SORTED and DISTINCT flags because filter will never modify the structure of the data. Is that clear enough? So how does the Stream use these flags for its own benefit? Remember that operations are structured in a LinkedList? So each operation combines the flags from the previous stage with its own flags, generating a new set of flags. Based on this, we will be able to omit some stages in many cases! Let’s take a look at an example: In this example we are creating a Set of String, which will always contain unique elements. Later on in our Stream we make use of distinct to get unique elements from our Stream; Set already guarantees unique elements, so our Stream will be able to cleverly skip that stage making use of the flags we’ve explained above. That’s brilliant, right? We’ve learned that Java Streams are able to make transparent optimisations to our Streams thanks to the way they’re structured internally, let’s look now at how do they get executed! 
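The Set-plus-distinct optimisation mentioned above can be reproduced with a short sketch like this one (the concrete strings are invented for illustration): the source already reports the DISTINCT characteristic, so the redundant distinct() stage can be skipped internally.
import java.util.Set;
import java.util.stream.Collectors;

public class DistinctOnSet {
    public static void main(String[] args) {
        // A Set already guarantees unique elements, so its spliterator reports
        // the DISTINCT characteristic.
        Set<String> names = Set.of("Alice", "Bob", "Carol");
        System.out.println("DISTINCT reported by source: "
                + names.spliterator().hasCharacteristics(java.util.Spliterator.DISTINCT));

        // The distinct() stage below is therefore redundant, and the stream
        // implementation can skip it based on the flags described above.
        var upper = names.stream()
                .distinct()
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(upper);
    }
}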
Execution We already know that a Stream is lazily executed, so when a terminal operation gets executed what happens is that the Stream selects an execution plan. There are two main scenarios in the execution of a Java Stream: when all stages are stateless and when NOT all stages are stateless. To be able to understand this we need to know what stateless and stateful operations are: Stateless operations A stateless operation doesn’t need to know about any other element to be able to emit a result. Examples of stateless operations are: filter, map or flatMap. A stateless operation doesn’t need to know about any other element to be able to emit a result. Examples of stateless operations are: filter, map or flatMap. Stateful operations On the contrary, stateful operations need to know about all the elements before emitting a result. Examples of stateful operations are: sorted, limit or distinct. What’s the difference in these situations then? Well, if all operations are stateless then the Stream can be processed in one go. On the other hand, if it contains stateful operations, the pipeline is divided into sections using the stateful operations as delimiters. Let’s take a look at a simple stateless pipeline first! Execution of stateless pipelines We tend to think that Java Streams will be executed exactly in the same order as we write them; that’s not correct, let’s see why. Let’s consider the following scenario, where we have been asked to give a list with those employees with salaries below $80,000 and update their salaries with a 5% increase. The stream responsible for doing that would be the one shown below: How do you think it’ll be executed? We’d normally think that the collection gets filtered first, then we create a new collection including the employees with their updated salaries and finally we’d collect the result, right? Something like this: Unfortunately, that’s not actually how Java Streams get executed; to prove that, we’re going to add logs for each step in our stream just expanding the lambda expressions: If our initial reasoning was correct we should be seeing the following: Filtering employee John Smith Filtering employee Susan Johnson Filtering employee Erik Taylor Filtering employee Zack Anderson Filtering employee Sarah Lewis Mapping employee John Smith Mapping employee Susan Johnson Mapping employee Erik Taylor Mapping employee Zack Anderson We’d expect to see each element going through the filter first and then, as one of the employees has a salary higher than $80,000, we’d expect four elements to be mapped to a new employee with an updated salary. Let’s see what actually happens when we run our code: Filtering employee John Smith Mapping employee John Smith Filtering employee Susan Johnson Mapping employee Susan Johnson Filtering employee Erik Taylor Mapping employee Erik Taylor Filtering employee Zack Anderson Mapping employee Zack Anderson Filtering employee Sarah Lewis Hmmm, that’s not what you were expecting, right? So actually how Java Streams are processed is more like this: That’s quite surprising, right? In reality the elements of a Stream get processed individually and then they finally get collected. This is VERY IMPORTANT for the well functioning and the efficiency of Java Streams! Why? First of all, parallel processing is very safe and straightforward by following this type of processing, that’s why we can convert a stream to a parallel stream very easily! Another big benefit of doing this is something called short-circuiting terminal operations. 
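The original article showed the employee pipeline as images, so here is a hedged reconstruction built from the description and the logged output above; the salary figures and the record syntax (Java 16+) are my own choices, not the author's exact code.
import java.util.List;
import java.util.stream.Collectors;

public class EmployeePipeline {

    record Employee(String name, double salary) { }

    public static void main(String[] args) {
        List<Employee> employees = List.of(
                new Employee("John Smith", 60_000),
                new Employee("Susan Johnson", 70_000),
                new Employee("Erik Taylor", 50_000),
                new Employee("Zack Anderson", 75_000),
                new Employee("Sarah Lewis", 90_000)); // above the $80,000 threshold

        List<Employee> updated = employees.stream()
                .filter(e -> {
                    System.out.println("Filtering employee " + e.name());
                    return e.salary() < 80_000;
                })
                .map(e -> {
                    System.out.println("Mapping employee " + e.name());
                    return new Employee(e.name(), e.salary() * 1.05); // 5% increase
                })
                .collect(Collectors.toList());

        System.out.println(updated.size() + " employees updated");
    }
}
Running it interleaves one "Filtering" line with one "Mapping" line per employee, matching the output shown above.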
We’ll take a brief look at them later! Execution of pipelines with stateful operations As we mentioned earlier, the main difference when we have stateful operations is that a stateful operation needs to know about all the elements before emitting a result. So what happens is that a stateful operation buffers all the elements until it reaches the last element and then it emits a result. That means that our pipeline gets divided into two sections! Let’s modify our example shown in the last section to include a stateful operation in the middle of the two existing stages; we’ll use sorted to prove how Stream execution works. Please notice that in order to use sorted method with no arguments, Employee class has now to implement Comparable interface. How do you think this will be executed? Will it be the same as our previous example with stateless operations? Let’s run it and see what happens. Filtering employee John Smith Filtering employee Susan Johnson Filtering employee Erik Taylor Filtering employee Zack Anderson Filtering employee Sarah Lewis Mapping employee Erik Taylor Mapping employee John Smith Mapping employee Susan Johnson Mapping employee Zack Anderson Surprise! The order of execution of the stages has changed! Why is that? As we explained earlier, when we use a stateful operation our pipeline gets divided into two sections. That’s exactly what has happened! The sorted method cannot emit a result until all the elements have been filtered, so it buffers them before emitting any result to the next stage (map). This is a clear example of how the execution plan changes completely depending on the type of operation; this is done in a way that is totally transparent to us. Execution of parallel streams We can execute parallel streams very easily by using parallelStream or parallel. So how does it work internally? It’s actually pretty simple. Java uses trySplit method to try splitting the collection in chunks that could be processed by different threads. In terms of the execution plan, it works very similarly, with one main difference. Instead of having one single set of linked operations, we have multiple copies of it and each thread applies these operations to the chunk of elements that it’s responsible for; once completed all the results produced by each thread get merged to produce one single and final result! The best thing is that Java Streams do this transparently for us! That’s great, isn’t it? One last thing to know about parallel streams is that Java assigns each chunk of work to a thread in the common ForkJoinPool, in the same way as CompletableFuture does. Now as promised, let’s take a brief look at short-circuiting terminal operations before we complete this section about how Streams work. Short-circuiting terminal operations Short-circuiting terminal operations are some kind of operations where we can “short-circuit” the stream as soon as we’ve found what we were looking for, even if it’s a parallel stream and multiple threads are doing some work. If we take a closer look at certain operations like: limit, findFirst, findAny, anyMatch, allMatch or noneMatch; we’ll see that we don’t want to process the whole collection to get to a final result. Ideally we’d want to interrupt the processing of the stream and return a result as soon as we find it. 
That’s easily achieved in the way Java Streams get processed; elements get processed individually, so for example if we are processing a noneMatch terminal operation, we’ll finish the processing as soon as one element matches the criteria. I hope this make sense! One interesting fact to mention in terms of execution is that for short-circuiting operations the tryAdvance method in Spliterator is called; however, for non short-circuiting operations the method called would be forEachRemaining. That’s it from me! I hope now you have a good understanding of how Java Streams work and that this helps you design stream pipelines easier! If you need to improve your understanding of Java Streams and functional programming in Java, I’d recommend that you read “Functional Programming in Java: Harnessing the Power Of Java 8 Lambda Expressions”; you can buy it on Amazon in the following link. Conclusion Java Streams have been a massive improvement in Java language; not only our code is more readable and easier to follow, but also less error-prone and more fluent to write. Having to write complex loops and deal with variables just to iterate collections wasn’t the most efficient way of doing things in Java. However, I think the main benefit is how Java Streams have enabled a way to do concurrent programming for anyone! You don’t need to be an expert in concurrency to write concurrent code anymore; although it’s good that you understand the internals to avoid possible issues. The way Java processes streams in a clever way has cleared our paths to process collections and write concurrent programs and we should take advantage of it! I hope that what we’ve gone through in this article has been clear enough to help you have a good understanding about Java Streams. I also hope that you’ve enjoyed this reading as much as I enjoy writing to share this with you guys! In the next article we’ll be showing many examples on how to use Java Streams so if you’ve liked this article please subscribe/follow in order to be notified when a new article gets published. It was a pleasure having you and I hope I see you again!
https://medium.com/swlh/understanding-java-streams-e0f2df12441f
['The Bored Dev']
2020-07-21 10:52:24.409000+00:00
['Functional Programming', 'Java', 'Technology', 'Programming', 'Software Development']
964
Let Them Wait
“LET THEM WAIT!” — King Jaffe Joffer Texts, emails, calendar reminder, phone calls, someone liked your Facebook post, new Instagram follower, Linkedin request, new stories on Snapchat, breaking news, new music on SoundCloud, SportsCenter top plays, iOS/Android update, new video from your favorite YouTube channel, tweets you missed, must-listen podcasts, Venmo request, rate your Uber driver, free Postmates delivery promo. You are drowning in fucking notifications at all times. How are you supposed to deal with this? Option 1: Stop what you’re doing every 3 minutes to check the notification on your phone. Option 2: LET THEM WAIT! How are you going to get anything deeply meaningful done without focusing for an extended period of time, uninterrupted? When your phone goes off, ask yourself: Is this an emergency? If it’s not, proceed to the next question. 2. What will happen if I respond to this in an hour? Nothing will happen. Don’t be a slave to your notifications.
https://medium.com/profoundaf/let-them-wait-3fce3122dbe3
['Jeremy Musighi']
2018-08-15 17:37:19.727000+00:00
['Technology', 'Social Media', 'Productivity', 'Personal Development', 'Business']
965
Distributed Ledger vs Blockchain Technology: Do You Know the Difference?
Distributed Ledger vs Blockchain Technology: Do You Know the Difference? Blockchain is increasing in popularity because of bitcoin and other cryptocurrencies. Many traditional centralized bodies such as governments and banks are starting to take an interest in blockchain technology. A new term that is starting to make waves in the cryptocurrency space is the distributed ledger technology. However, many people usually confuse distributed ledgers with blockchain and vice versa. In this article, we will highlight everything you need to know about distributed ledger vs blockchain. What is a Distributed Ledger? A distributed ledger is a database that can be found across several locations or among multiple participants. However, most companies still use a centralized database with a fixed location. Unlike a centralized database, a distributed ledger is decentralized, which helps to remove the need for a central authority or intermediary for processing, validating, or authenticating transactions. Furthermore, these records will only be stored in the ledger after the parties involved have reached a consensus. What is Blockchain? A blockchain is a form of distributed ledger that has a specific technological underpinning. Blockchain creates an unchangeable ledger of records maintained by a decentralized network after a consensus approves all the records. The significant difference between blockchain and DLT is the cryptographic signing and linking groups of records in the ledger that forms a chain. Furthermore, there is a chance for the public and users to determine how a blockchain is structured and run based on the specific application of blockchain. What is the Difference Between Distributed Ledger and Blockchain Technology? Although both blockchain and distributed ledger sounds similar, there are some differences between the two. Blockchain can be categorized as a type of distributed ledger, but you cannot classify every distributed ledger as a blockchain. We have listed some of the unique aspects of blockchain and distributed ledgers to help you better understand the DLT vs blockchain technology comparison. 1. Block Structure The first difference between blockchain and distributed ledger technology is the structure. A blockchain usually comprises blocks of data. However, this is not the original data structure of distributed ledgers. This is because a distributed ledger is just a database that is spread across several nodes. But you can represent this data in numerous ways in each ledger. 2. Sequence All the blocks in blockchain technology are in a particular sequence. However, a distributed ledger does not need a specific data sequence. 3. Proof of Work In most cases, blockchains usually use the proof of work mechanism. However, there are other mechanisms, but they typically take up power. Distributed ledger, on the other hand, does not need this type of consensus, which makes them more scalable. Blockchain is just a subset of distributed ledgers, and it has additional functionality aside from the traditional DLTs scope. Proof of work adds a significant difference between distributed ledger vs blockchain. 4. Real-Life Implementations Implementation is an essential point to consider when understanding the differences between blockchain and distributed ledger. Blockchain has many implementations in real life as it is more popular, and many usages are developed in due course of time. 
Since a lot of enterprises are adopting the blockchain nature and are slowly integrating it into their systems, you will also find big giants like Amazon, IBM, etc., that offer good blockchain as a service solution. In comparison, developers recently started to dive deep into the distributed ledger technology core. Although there are several types of DLTs in the tech world, there are few real-life implementations. However, they are still being developed, and we will start to see real-life implementations very soon. 5. Tokens There is no need for tokens or any currency in a distributed ledger technology. However, you may need tokens to block and detect spam. Anyone can run a node in blockchain technology. However, running a full node requires a considerable network that may be difficult to manage. Furthermore, there is usually some token economy, and it takes a fundamental role in blockchain technology. However, modern blockchain technology is looking for a way to leave the cryptocurrency shadow. Blockchain vs Distributed Ledger Comparison Table Here’s a quick comparison showcasing the difference between distributed ledgers and blockchain technology, Advantages of Using a Distributed Ledger like Blockchain Using blockchain technology offers a secure and efficient way to create a tamper-proof log of sensitive activity. Blockchain has the potential to give an organization a safe and digital alternative to banking processes. We can use distributed ledgers like blockchain for financial transactions as they help reduce operational inefficiencies and save money. Since distributed ledgers like blockchains are decentralized in nature and the ledgers are immutable, they offer greater security to the organization. Distributed Ledger Technology Beyond Blockchain Although the popularly known distributed ledger technology is blockchain, the distributed ledger technology future will depend on the collaborative effort of the two technologies. According to James Wallis, the Vice President of Blockchain Markets and Engagements for IBM, the uses of DLT will be greater than what we can think of today, but it will require a level of sharing that does exist before. Furthermore, if DLTs become standard, they can easily revolutionize the Know Your Customer (KYC) process. For those who don’t know, KYC is a business process to identify and verify its clients' identities. It will then help make broader identity management much more straightforward. You may also like,
https://medium.com/brandlitic/difference-between-distributed-ledger-and-blockchain-vs-dlt-7969f3837ded
['Amarpreet Singh']
2021-06-01 14:25:56.360000+00:00
['Blockchain', 'Distributed Ledgers', 'Blockchain Technology', 'Bitcoin', 'Cryptocurrency']
966
JavaScript and the Web — Keyboard, Mouse, and Touch Events
Photo by Claudel Rheault on Unsplash JavaScript is one of the most popular programming languages in the world. To use it effectively, we have to know its basics. In this article, we'll look at how we can handle keyboard, mouse, and touch events in JavaScript. Key Events The keydown event is triggered when we press a key, and the keyup event when a key is released. For instance, if we have the following HTML: <p> foo </p> We can write: window.addEventListener("keydown", event => { if (event.key === "g") { document.body.style.background = "green"; } }); window.addEventListener("keyup", event => { if (event.key === "g") { document.body.style.background = ""; } }); Then when we press the G key, the background will turn green. It'll turn back to white when we release it. The key property has the key that's pressed. The name of the key corresponds to what it is on the keyboard. keydown fires when a key is pressed and held. The event fires again every time the key repeats. Modifier key presses can be checked via the shiftKey, ctrlKey, altKey, and metaKey properties. The Meta key is the Windows key on Windows keyboards and the Command key on Mac keyboards. To check for key combinations, we can write: window.addEventListener("keydown", event => { if (event.key === "g" && event.altKey) { document.body.style.background = "green"; } }); window.addEventListener("keyup", event => { if (event.key === "g" && event.altKey) { document.body.style.background = ""; } }); So the screen turns green when we press Alt and G at the same time. It turns back to white when we release them. When a user is typing text, we shouldn't use the keyboard events to pick up what's been typed. This is because many people use input method editors that change the keyboard to type things other than the English language. We'll get those wrong if we get the inputted values from keyboard events. Pointer Events We can use pointer events to handle mouse clicks. In addition to mice, they also cover touchpad, touchscreen, and trackball input. Mouse Clicks We can use the mousedown and mouseup events to get mouse events. mousedown listeners run when a mouse button is clicked. mouseup listeners run when a mouse button is released. If 2 clicks happen close together, we can use the dblclick event to listen for them. To get the position of where a mouse event happened, we can look at its clientX and clientY properties. They contain the coordinates in pixels relative to the top-left corner of the window. pageX and pageY are the coordinates relative to the top-left corner of the whole document, which may be different when the window has been scrolled. For instance, we can write: window.addEventListener("click", event => { const div = document.createElement("div"); div.innerText = `(${event.pageX}, ${event.pageY})`; document.body.appendChild(div); }); Then when we click the browser window, we'll see the coordinates of the clicks listed on the screen. Mouse Motion We can listen to mouse motions with the mousemove event. It can be used to track the position of the mouse. For instance, we can write: window.addEventListener("mousemove", event => { document.body.innerText = `(${event.pageX}, ${event.pageY})`; }); Now when we move the mouse within the browser window, we get the position logged. Touch Events We can listen to touches with mouse events; click corresponds to a tap on the screen. There are also additional touch-only events. They include touchstart, which is fired when our finger starts touching the screen.
touchmove is fired when we move while touching, and touchend is fired when we stop touching the screen. Many touchscreens can detect multiple fingers at a time, so a single pair of coordinates isn't enough. The event object has a touches property, which is an array-like object of touch points with their coordinates. Scroll Events A scroll event is fired whenever we scroll through a page. For instance, if we have a page of content created dynamically: document.body.appendChild(document.createTextNode( "foo ".repeat(1000))); window.addEventListener("scroll", () => { const scrollHeight = document.body.scrollHeight; console.log(scrollHeight); }); Then when we scroll through the page, we'll see the scroll height of the page logged. There's also an innerHeight global variable that gives us the height of the window. Calling preventDefault on a scroll event doesn't stop the scrolling from happening; the event handler is called only after scrolling takes place. Conclusion We can handle mouse clicks and scroll events within a page with JavaScript. Touch events can also be handled.
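To make the touches property described above concrete, here is a minimal sketch (the output element and the exact messages are my own additions for illustration, not from the original article) that shows the first finger's position as it moves across the screen:

const output = document.createElement("div"); // element used only to display the touch position
document.body.appendChild(output);

window.addEventListener("touchstart", event => {
  // touches is an array-like list of the currently active touch points
  const { clientX, clientY } = event.touches[0];
  output.innerText = `start at (${clientX}, ${clientY})`;
});

window.addEventListener("touchmove", event => {
  // update the displayed coordinates as the first finger moves
  const { clientX, clientY } = event.touches[0];
  output.innerText = `moving at (${clientX}, ${clientY})`;
});

window.addEventListener("touchend", () => {
  // touches no longer includes lifted fingers, so report the end instead
  output.innerText = "touch ended";
});

On a multi-touch screen, event.touches holds one entry per active finger, so the same pattern extends to pinch or two-finger gestures by also reading touches[1].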
https://medium.com/javascript-dots/javascript-and-the-web-keyboard-mouse-and-touch-events-c84e1ba97e0f
['John Au-Yeung']
2020-06-19 15:20:39.838000+00:00
['Technology', 'Programming', 'Software Development', 'Web Development', 'JavaScript']
967
Exciting times — We’re in the middle of a revolution — The invention of Digital Asset
After years of experience in the software engineering industry, especially in debugging and reverse-engineering, I've had to sharpen a specific skill, a skill that detectives must use at a crime scene: the ability to filter the noise and notice the odd. Essentially, investigating a program crash is like investigating a crime scene; you look for clues and try to figure out the motives. This skill comes in handy when I observe what's happening in the world right now: the rush for crypto-gold. People rush in with their cash in one hand and dreams of sudden, outsized wealth in the other. I don't want to be the party pooper, but you've fallen into the hands of a group of people who outsmarted you. And surprising as it may sound, as soon as you join this vampire club you'll become just one of them. You'll soon understand that the only way to get out of this game is to bring someone else in. If you're lucky enough you'll get out with your money and maybe with some profit, and if you're less lucky, your money will eventually be gone forever because someone else already took it as their profit. Those are simply the unwritten rules of this game, whether you like it or not. What I want to point out in this article is that this game is only a delusion and a distraction from what's really going on. Bitcoin, altcoins, the bubble, regulations, acceptance, what you can really do with cryptocurrency, and so on: all that rumbling is the noise. If you let yourself loosen your grip on your strongly held beliefs and on what you've taken for granted, you'll be able to see it for yourself. Bitcoin is an application, just like the Calculator app on your phone. Blockchain is the amazing device that was sophisticatedly built to make this application possible and tangible. The invention of this piece of technology, blockchain, is what enabled humanity, for the first time in human history, to define and use a digital asset. Until this point in time it was not possible to possess a digital asset in a safe and secure way such that you could say confidently, "I'm the one and only possessor of this asset". Blockchain enables this. This brilliant piece of work combines multiple paradigms and doctrines, including computer science, economics and game theory, mastered in a way that is tamper-safe and synchronises multiple parties with incentives to keep the rules. That allows an asset and its single owner to be uniquely defined. You cannot copy a digital asset (nope, ctrl-C, ctrl-V won't work). You cannot forge or tamper with a digital asset. You don't need a higher authority to prove it's yours; we all know it is yours. No one, including your government, could ever forcibly take it away from you. This is what blockchain is all about. This digital asset, which is essentially an unpredictable, unique string composed of a special combination of alphanumeric characters and protected by a sophisticated set of rules, can represent anything from "nothing", a penny, a shareholding, a voting right, or a house, to even a person. What can you do with it? Obviously the first and most well-known implementation is Bitcoin. But you can construct anything that is related to some sort of possession on top of it. This revolution is the entrance to a new digital era that now enables us to move money, contracts, ownership and so on onto this decentralised, tamper-proof database. We are heading toward an even more digitalised and electronically coupled future, which sometimes might seem scary but unleashes endless possibilities. 
We would almost have missed it if not for those greedy people who thought to put their money into Bitcoin, started to pump it up and talk about it, and essentially made some noise. Without them, that platform wouldn't have been put to actual use, no one would talk about it, and it could have ended up buried in the open source graveyard.
https://medium.com/livshitz/exciting-times-were-in-the-middle-of-a-revolution-the-invention-of-digital-asset-3d083fc1c1fd
['Elya Livshitz']
2018-06-26 14:16:40.952000+00:00
['Technology', 'Revolutionary', 'Blockchain', 'Cryptocurrency', 'Bitcoin']
968
DroneBase Now Integrates Into the Esri ArcGIS Platform
We’re excited to announce that Esri customers will now be able to request drone flights from DroneBase through our integration with Esri’s mapping and analytics platform, ArcGIS. This marks the first time that Esri users will be able to order a drone flight directly within the platform. By selecting DroneBase, Esri users can expect a scalable and efficient way to access drone image and data collection from our reliable Pilot Network. All images captured by our drone pilots will be published to the ArcGIS platform, making it easy for users to utilize the images, and access the mapping and analytics resources available through the platform. From Request to Completion What will this process look like for an Esri user who is new to DroneBase? Here is a step-by-step guide, from requesting a Mission to accessing all data collected: Visit the DroneBase app listing on the ArcGIS marketplace and click “Contact Provider” to get started on creating a DroneBase account and linking it to your ArcGIS account. Follow the instructions in the onboarding email sent to you from DroneBase to complete linking your accounts. Request a drone flight through the DroneBase platform using our intuitive order flow. A vetted and experienced local DroneBase pilot is dispatched to collect the desired images within 48 to 72 hours. Once the Mission is complete, the imagery is uploaded and automatically published to ArcGIS as a map layer. You can then easily access the imagery or upload to one of the analytics tools ArcGIS has to offer. If you’re an existing DroneBase customer AND Esri user, you can now easily connect your two existing accounts: Open your DroneBase Client Dashboard and click the Integrations tab in the left sidebar. Click the button that says “CONNECT YOUR ArcGIS ACCOUNT”. Log in to your ArcGIS account. That’s it! Now you can use your maps and data to place orders, and view your assets directly in your ArcGIS account. Seamless Integration Aerial imagery and data captured by DroneBase pilots can be easily used by any organization within the ArcGIS platform for any purpose that may benefit from drone imagery, including visual inspections, site monitoring, asset management, and situational awareness. By utilizing DroneBase’s pilot network, users will have access to fast collection of drone imagery, and then leverage any number of the apps available on ArcGIS to analyze the imagery and gain greater insights. This means architecture, engineering, and construction organizations can monitor site construction more easily and cost-effectively assess the integrity of their assets or the extent of possible damage all in one place. Together with Esri, we will provide customers with our extensive pilot network while utilizing the benefits of the ArcGIS platform. With the extensive resources now available to customers, DroneBase will continue to serve even more professional industries and individuals. Visit our ArcGIS Marketplace listing page here and get started today!
https://medium.com/@dronebase/dronebase-now-integrates-into-the-esri-arcgis-platform-2f6a5a2b0459
[]
2020-03-17 20:54:44.412000+00:00
['Startup', 'Analytics', 'GIS', 'Technology', 'Drones']
969
Tesla vs Others — Eclectic Past and Electric Future
Predicting the future is hard, but creating the future is what makes a company great. Can Tesla Inc. surpass Apple Inc. to become the world's most valuable company? Luminar founder Austin Russell has become the newest and youngest billionaire in the world. Luminar is a company that sells an automotive component known as 'LIDAR', a high-priced 'component' which promises to achieve full autonomous driving capability. Volvo made an investment in the company in 2018, and it is now valued at more than $8 billion. Luminar's products are being sought after by nearly every big automaker in the world but Tesla. Tesla, the company, does not really believe in LIDAR as a technology (as a breakthrough sensor for Level 5 autonomous driving). They find it 'expensive' and 'unnecessary'. In Elon Musk's own words: anyone relying on LIDAR is doomed. Doomed. Expensive sensors that are unnecessary. It's like having a whole bunch of expensive appendices… you'll see. This is interesting. A company which is yet to find its competitor has charted its own path, far away from the market's direction. What makes them so confident? Why is it always Tesla vs Others? Tesla — the pathfinder Tesla (Motors) was founded in July 2003, and 17 years later the company is one of the most trending (and maybe infamous) stocks on the Nasdaq. It moved from $80 to $600+* — a massive rally (for the year 2020, till 11th Dec. 2020). The company founder, Elon Musk, himself thinks the company is overvalued, yet he is always toiling to keep production running and deliveries going. Tesla (really Elon Musk) knows that the path to profitability cannot be selling cars alone but unbundling the ecosystem. It needs to make money from all the aspects, and hence it currently leads the pack as a company which doesn't aim to solve only transportation but other consumer markets too. Here are some potential synergies: HVAC (Heating, Ventilation and Air Conditioning) (B2C and B2B) Full Autonomy Driving Software Package (B2B) Batteries (B2C and B2B) Solar Roof through SolarCity (B2C and B2B) 2 Wheeler (Scooter and Bikes) Finance / Auto Loan (B2C) Auto Insurance (B2C) Charging Network — Pay per use (B2C and B2B) Car Subscription (B2C) IP Licensing (B2B) Manufacturing is hard, and capital intensive. If Tesla needs to accelerate the ecosystem, it needs to share its power-train and battery technology in exchange for manufacturing scale-up. For emerging markets, none of the current models would make sense. Emerging markets expect pricing to be around $15,000 for a decent car, which is less than half of its cheapest model (Model Y). Unless Tesla forges deep partnerships in exchange for technology, scale-up will be harder. Large economies have many early adopters who are attracted to mid-to-high premium cars, but they are also courted by established brands who are looking to launch lucrative models in the market. Those brands might be 3–5 years behind Tesla, but they have known how to make cars for decades, and a few of them for a century. Their distribution and production capacity alone will pose problems for Tesla's scale. What made Tesla the leader Tesla, for that matter, has nailed the branding and placement of its brand among enthusiastic car buyers. Elon knows how to market products with zero ad spend. Twitter alone does the job for him, let alone product launches and savvy media outlets tracking him. Tesla is killing it on NPS scores — YouTube, podcasts, Twitter, they are everywhere. However, it should not let down its customers who wait patiently for deliveries. 
Tesla’s performance and charging network is doing wonders mixed with its early entrant advantage in the market. Automotive industry is ruled by behemoth and hence constant innovation on products (read new launches in popular consumer segment) is the key. From Roadster to Model Y and from Semi to Tesla Cyber-truck keeps the engagement level of market high. Tesla’s success is also partly funded by slow reaction by world’s leading automakers which is surprising. They are still trying to understand the market. Electric and sustainable energy production is the future. Oil is heavy for emerging economies and its repercussions are far and wide. The future is in front of us and people who are slow to react will pay heavily licensing a IP from market leader itself. Tesla’s momentum and fan base only makes them market leader. Competition who may catch-up Electric vehicle will be a commodity in next 7–10 years. Penetration for public transport will convince people to buy electric and it will further accelerate if Governments can fund with tax waiver and subsidies. However, this will make sure bigger automaker will be forced to launch vehicles — from BMW, Volvo to Toyota and Honda. But the real competition of Tesla could be Chinese automakers. They can replicate any technology faster than anyone else in the world. Don’t think about Rivian too much(could be great but targeted to a niche premium segment) but think about NIO, XPENG and BYD. While US and China relationship could dent Chinese companies to launch and service their vehicles in the US coupled with Anti-china sentiment after the COVID-19 pandemic but those are outcome of large Geo-politics which may go away in 3–5 year timeline. This will leave the market wide open for disruption without any external restriction to provide cushion to certain automakers. Bonus tip: Software companies will also make a killing in autonomous driving spree. Google, Apple and Amazon — and other big Tech will jump this sector to sell services while you own a car. Tesla need to understand and unbundle the market. Services make more money then the product itself. Companies would take a note from companies like Gillette and Apple. Sell a product and then sell the entire ecosystem — over and over again. The key to monetization requires them to follow the customer. Future is now? Whomsoever the market belongs to but ultimate beneficiary would be consumers and our planet. While disposing batteries and solar cells would be certainly a nightmare. Instead of smaller countries become the dumping yard — future should belong to recycling and sharing economy not just manufacturing scale-up.
https://medium.com/@stratup/tesla-vs-others-eclectic-past-and-electric-future-4714e18e83e2
[]
2021-01-01 19:31:18.755000+00:00
['Self Driving Cars', 'Sustainability', 'Elon Musk', 'Technology', 'Tesla']
970
Opening up to a connected world: Technology as the driving force
Web Summit Main Stage (Photo courtesy of Web Summit Facebook) It was my first time in Europe! My nerves quickly turned to excitement as I boarded my flight: destination — Web Summit 2016 in Lisbon, Portugal. On arrival, the Web Summit presence was loud and clear at the airport, where attendees and speakers checked in before continuing into the city. The first night kicked off with Roberto Azevedo (Director-General of the World Trade Organization), José Manuel Barroso (former president of the European Commission), and Tom Nuttall (columnist for The Economist) discussing the uncertainty our world faces amidst economic, political, and societal shifts, and how technology is the true underpinning factor. While change can be exciting and can bring about a more connected world, not everyone is reacting in the same way. Instead of being players in a more connected world, some are siding with protectionism in defense of their traditional statehood, human capital and, ultimately, their markets. "Unfair trade" is often touted as the reason for the hesitancy and reluctance to fully open up to a global economy, but what's at the heart of the issue is the rate of technological innovation and the ability to adapt. The effort required to transform societies to accommodate the new wave of technology is monumental. The entire spectrum of education, public training, financing, transportation, healthcare, and customer service will have to scale accordingly. We've already heard about the likes of self-delivering cargo trucks and drones — imagine if this industry takes off. Not only will there be ramifications for the transport industry, but support industries such as side-of-the-road assistance, roadside restaurants, and hotels will be affected as well, which will impact the working class and deplete the jobs in this sector…you see where I'm going. Photo by Benjamin Stooke We're all familiar with the story of automation replacing human labor, but this time around the scale and speed of change will be ramped up. The answer, according to a conference panelist, is preparation. As these changes unfold, a trickle-down effect will occur and impact us at a granular level. Undoubtedly, new legislation will be needed, and a tremendous amount of labor force training will have to take place as some jobs become obsolete and new ones arise. Think along the lines of zoning policy for drones, insurance and liability issues for at-home robots, new school curricula, and repair shops for robots. Indeed, how we prepare will be indicative of how smoothly we transition into our future society. I'll continue to unfold my findings from the Web Summit as we wrap up 2016. My next piece will cover How the Integration of AI Into the Workplace Will Power Positive Health Outcomes.
https://medium.com/healthwellnext/opening-up-to-a-connected-world-technology-as-the-driving-force-ae0bfb977257
['Carl Frederique']
2016-12-27 20:14:31.534000+00:00
['Tech', 'Startup', 'Future', 'AI', 'Technology']
971
Insurtech Europe powered by Plug and Play: Batch 3 EXPO Day
Munich, Germany, 20 February 2020 — On February 5th, 2020, 14 Insurtech startups pitched during Insurtech Europe’s Batch 3 EXPO Day held at Plug and Play Munich. The audience consisted of over 200 members of the Insurtech ecosystem including corporations, investors, and startups from Germany, France, Israel, Switzerland, Singapore, Spain, the Netherlands, the United Kingdom, and the United States. Insurtech Europe EXPO is the culmination of the three-month program and features a dynamic morning of pitches from startups, where founders share their unique solutions and major milestones reached since joining the accelerator program. Amongst others, one success story was shared by EasySend, who announced today that they were able to negotiate pilot projects in the last three months as an outcome of their participation in our Insurtech Europe program. Featured speakers included René Schoenauer, Director, Product Marketing, EMEA at Guidewire, Arno Wilke, General Manager at Hannover Re Life & Health, and Greg Yoder, Head of Business Development at Sureify, who shared their viewpoints on corporate innovation and collaboration within the insurance space. “It’s very interesting for us to have one event where we can meet insurance companies but also meet a lot of insurtech startups because it is my core belief and also Guidewire’s core belief that a lot of innovation is going to come out of the insurtech space in the future,” said René Schoenauer. During the EXPO Day, Plug and Play’s program “Enterprise Tech” was showcased for the first time in Europe with seven startups from the United States, Poland, Germany, Italy, and Switzerland. “We are the only horizontal program within Plug and Play, applicable to all industries, which creates a more inclusive service for our partners. Our focus areas are Cybersecurity, Finance, Operations & Legal, Growth & Customer Experience, Infrastructure & I.T., Human Resources & Talent, and Big Data & Analytics,” said Nate Hinman, Director of Enterprise Tech at Plug and Play. Plug and Play’s famous Innovation Award for the most innovative members of the ecosystem was handed this time to Versicherungskammer (VKB) due to their engagement and dedication collaborating with startups throughout the Batch 3 program. “Through our partnership with Plug and Play, we gain access to international innovation and evaluate concrete business opportunities. The Plug and Play Insurtech team shares our mindset and supports us in bringing our business to the next level,” said Johannes Wagner, Head of Startup Cooperation at Versicherungskammer and Member of the Board at ITHM. Plug and Play’s People’s Choice Award was given to Fixico who received nearly 50% of the audience’s votes, “Thank you to Plug and Play for all the arrangements and interactions we had, we hope to keep collaborating in the future,” commented Derk Roodhuyzen de Vries, Founder & CEO. “It is exciting to see how much traction our program and platform receives. We are happy to welcome two new partners, Achmea and Aflac and are looking forward to keeping this momentum by growing our platform even more in Batch 4 starting March 2020,” stated Dr. Xenia-Isabel Gioia Poppe, Director Insurtech Munich & Head of Programs, Insurtech Plug and Play. About Plug and Play Insurtech Established in 2016, Plug and Play Insurtech is one of Plug and Play’s largest industry-specific programs. 
Alongside its headquarters in Silicon Valley, this platform runs programs in five global locations including Beijing, Munich, New York, Singapore, and Tokyo. The program currently has over 100 corporate participants including Farmers Insurance, Nationwide, SOMPO Digital Lab, and Travelers, and has worked with hundreds of international Insurtech startups to date. For more information, visit http://plugandplaytechcenter.com/insurance/. Plug and Play Insurtech Contact: [email protected] About InsurTech Hub Munich Insurtech Hub Munich is one out of twelve digital hubs initiated by the German Federal Ministry of Economy, in cooperation with Insurtech startups, corporate partners, investors and research institutions, all based in Munich. Our goal is to determine the future of the insurance sector by providing a platform for the everyday exchange between all-important current and future players, building meaningful partnerships across industries and enabling alternative ideas and business models. For more information, visit https://www.insurtech-munich.com/. InsurTech Hub Munich Contact: [email protected]
https://medium.com/@PlugandPlay/insurtech-europe-powered-by-plug-and-play-batch-3-expo-day-8423f9f59d80
['Plug', 'Play Tech Center']
2020-02-20 09:09:49.195000+00:00
['Startup Ecosystem', 'Enterprise Technology', 'Open Innovation', 'Insurtech']
972
Radical Technologies — 6 Revolutions From The Dot Com Era
When we reflect on the many revolutions of the last 30 years, access to resources that were formerly reserved for institutions, the elite and centralised powers has gradually been distributed to the people through various innovative technologies. Like the ocean, resources accumulate in mountainous centralised waves. With the help of technologies, the waves overflow and dissipate into wider expanses. Here we reflect on past technological eras and the teams that ushered in revolutions. 6. In 1984, the founding Apple team and a small advertising agency came together to create an Orwell-inspired Super Bowl advert to introduce a revolutionary personal computer for the everyday person. A 'way of life' technology that even changed icons into art, an act that revolted against the elitist constraints of IBM and the like. A technology that made the computer personal: the Macintosh. 5. Did you know that the MP3 format was invented in the 1970s? It was only brought to life in 1998 by South Korean company Saehan with the world's first MP3 player, the MPMan. This inspired 19-year-old student Shawn Fanning to program his own community sharing software; he called it Napster. Napster was famously pursued for allowing its users to trade music and videos for free without paying royalties to production companies and artists. However, the software's concept revolutionised media sharing and led to a new era which manifested in platforms such as Spotify, iTunes, SoundCloud, BandCamp and more. Shawn later joined a startup company in the midst of a new social movement, a social movement that would revolutionise how we interact with each other today. That movement was Facebook. https://arubaito.io/wp-content/uploads/2021/09/mobile-mp3.mov 4. In the mid 90s, making a simple email account was a daunting task that could only be done and used from a computer. But in San Francisco, a bored Apple employee, Sabeer Bhatia, left the tech giant and pursued an idea to create a browser-based email that was accessible from anywhere in the world, on any device with access to the internet; he called it Hotmail. Hotmail was, by today's standards, one of the internet's first viral sensations, spurred on by its ingenious 'Get your free email at Hotmail' button at the bottom of every email sent, the forerunner of today's 'Share' button. Hotmail received over 100 thousand subscribers in the first few weeks and was later bought by Microsoft for $400 million. The Hotmail model is still copied today. 3. In the year 1440, religion went viral thanks to a technology that we seldom think of as a technology: the book. A man from Germany, Johannes Gutenberg, revolutionised the printing of words on paper with a new type of printing press. It differed from the Chinese and Korean 'wood block printer' because it used mechanics to press ink to moving paper, making it easy to print 20 million volumes by 1500. This led to a widespread golden age of knowledge for the first time in history, resulting in access to written resources and religious texts such as the Bible, which was followed by the mass adoption of Martin Luther's "95 Theses" and a Protestant Reformation. 2. Imagine a time when you had to send a cheque to pay for something online. Horrifying, yes. This was an era, not too long ago, that was revolutionised by a virtual payments service that affects the way in which we buy things online to this day. That service is called PayPal. 
PayPal, originally named Confinity, was started in 1998 by Max Levchin, Peter Thiel, Luke Nosek and Ken Howery. Elon Musk, a competitor from the virtual payments company X.com, joined and eventually became CEO. PayPal gave unbanked people in both developed and developing nations the ability to access online global payments, bringing prosperity to many. 95 per cent of people in the UK are now shopping online, and most of them have probably paid for their goods via PayPal. 1. If you have ever heard the origin story of Facebook, then you surely would have heard of Cameron and Tyler Winklevoss. In popular media, the Winklevoss twins were portrayed as the privileged, blonde, athletic embodiment of the establishment in Mark Zuckerberg's genius underdog narrative. But how much do you really know about the revolutionary Winklevoss twins? After the twins sought compensation in court for stolen intellectual property, the idea for Facebook, they eventually settled. Facebook went on to build a siloed empire that covertly extracts data from its users to turn a profit. Meanwhile, the Winklevoss twins pursued decentralisation of control and cut out gatekeepers through a platform they called Gemini. Gemini was named after the twin star sign and NASA's moon-bound mission Project Gemini, evoking the cutting-edge technology and the "number go up" culture of the platform's main service, cryptocurrency. The Winklevoss twins say, "We're all astronauts building on the frontier of money and the frontier of art and the frontier of finance. We feel like we're on a spaceship, exploring a new frontier." Gemini grants the everyday person the level of returns they should have been receiving for saving money in the bank all along, increasing returns from 0.01% to 8.05%, returns that centralised banks would otherwise keep as the majority of the profit earned from reinvesting your savings. Gemini's exchange lets you buy and sell bitcoin and 40+ crypto assets. They even protect you from global inflation, political instability, theft, and unknown risks with their wallet app. Recognising that the future is becoming more digital, the Winklevoss twins are helping people tie their real-life assets to the digital world through 'Non-Fungible Tokens' or NFTs. The unprecedented empowerment of the individual and community stands before us. 'Decentralised Technology' through blockchain and cryptocurrency is an era that will overshadow the dot com era, radically transforming and bringing hope to the furthest nooks of our world. Will you join the next revolutionary team? Find a Career in Crypto, Join the Revolution Today. Register Free.
https://medium.com/@arubaito/radical-technologies-6-revolutions-from-the-dot-com-era-b4c82fe00250
[]
2021-09-13 16:32:34.326000+00:00
['Blockchain', 'Crypto Jobs', 'Technology', 'Cryptocurrency', 'Revolution']
973
BootstrapVue — Button Groups and Toolbars, and Calendar
Photo by Estée Janssens on Unsplash To make good-looking Vue apps, we need to style our components. To make our lives easier, we can use components with styles built-in. In this article, we'll look at how to add button groups and toolbars to group buttons. We also look at how to use the calendar to let users select dates. Button Group Button groups let us group a series of buttons together. To add a button group, we can use the b-button-group component. For instance, we can write: <template> <div id="app"> <b-button-group> <b-button>Button 1</b-button> <b-button>Button 2</b-button> </b-button-group> </div> </template> <script> export default { name: "App" }; </script> We have the b-button-group component with b-button components inside. Vertical Groups We can make the group vertical by adding the vertical prop: <template> <div id="app"> <b-button-group vertical> <b-button>Button 1</b-button> <b-button>Button 2</b-button> </b-button-group> </div> </template> <script> export default { name: "App" }; </script> Then the buttons are arranged vertically. Dropdown Menu We can add dropdown menus inside our button group. For instance, we can write: <template> <div id="app"> <b-button-group> <b-button>Button 1</b-button> <b-dropdown right text="Menu"> <b-dropdown-item>foo</b-dropdown-item> <b-dropdown-divider></b-dropdown-divider> <b-dropdown-item>bar</b-dropdown-item> </b-dropdown> </b-button-group> </div> </template> <script> export default { name: "App" }; </script> We have a dropdown menu on the right of the button with the b-dropdown component. We have b-dropdown-item and b-dropdown-divider components inside. Button Toolbar A button toolbar is similar to a button group. It groups multiple buttons together. Also, we can use it to group multiple button groups together. We use the b-button-toolbar component to create a button toolbar. For instance, we can write: <template> <div id="app"> <b-button-toolbar> <b-button-group class="mx-1"> <b-button>apple</b-button> <b-button>orange</b-button> </b-button-group> <b-button-group class="mx-1"> <b-button>banana</b-button> <b-button>grape</b-button> </b-button-group> </b-button-toolbar> </div> </template> <script> export default { name: "App" }; </script> The mx-1 class makes the groups spaced apart. We apply it to the button groups to add a margin between them. We can also add a dropdown menu inside a button toolbar as we do with button groups. For instance, we can write: <template> <div id="app"> <b-button-toolbar> <b-button-group class="mx-1"> <b-button>apple</b-button> <b-button>orange</b-button> </b-button-group> <b-dropdown class="mx-1" right text="menu"> <b-dropdown-item>foo</b-dropdown-item> <b-dropdown-item>bar</b-dropdown-item> </b-dropdown> <b-button-group class="mx-1"> <b-button>banana</b-button> <b-button>grape</b-button> </b-button-group> </b-button-toolbar> </div> </template> <script> export default { name: "App" }; </script> We have the same classes to keep them apart. The text prop sets the text for the dropdown button. The b-dropdown-item components hold the dropdown items. Calendar BootstrapVue has the b-calendar component to create a calendar. For instance, we can write: <template> <div id="app"> <b-calendar v-model="date" @context="onContext" locale="en-US"></b-calendar> </div> </template> <script> export default { name: "App", data() { return { date: new Date() }; }, methods: { onContext(e) { console.log(e); } } }; </script> We have the onContext method to get the selected date value when the calendar is clicked. 
date is the model, which we bind to the calendar with v-model . Disabled and Readonly State We can disable the calendar with the disabled prop. Likewise, we can add the readonly prop to make it read-only. The difference between them is that disabled removes all interactivity, while readonly disables selecting a date but keeps the component interactive. So if we have: <b-calendar disabled locale="en-US"></b-calendar> then we disable the whole calendar. If we have: <b-calendar readonly locale="en-US"></b-calendar> then we can navigate through the calendar but can't select anything. Date Constraints We can limit the date range that can be selected with the min and max props. For instance, we can write: <template> <div id="app"> <b-calendar v-model="date" :min="min" :max="max" locale="en-US"></b-calendar> </div> </template> <script> export default { name: "App", data() { return { date: new Date(), min: new Date(2020, 0, 1), max: new Date(2020, 11, 31) }; } }; </script> Then we can only select dates that are in the year 2020. Photo by Victor Malyushev on Unsplash Conclusion We can use the calendar component to let users select dates. The selectable date range can be limited. Also, we can group buttons with button groups and button toolbars.
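As a small usage sketch building on the calendar examples above (the surrounding markup and the "none" placeholder are my own, purely for illustration), we can show the selected date next to the calendar by reading the same date model that v-model writes to:

<template>
  <div id="app">
    <!-- the calendar writes the user's selection into `date` through v-model -->
    <b-calendar v-model="date" locale="en-US"></b-calendar>
    <!-- display whatever the user picked; stays "none" until a date is selected -->
    <p>Selected date: {{ date || "none" }}</p>
  </div>
</template>
<script>
export default {
  name: "App",
  data() {
    return { date: "" };
  }
};
</script>

Because v-model keeps date in sync with the calendar, no extra event handling is needed just to display the selection.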
https://medium.com/dataseries/bootstrapvue-button-groups-and-toolbars-and-calendar-bbe78f312fe9
['John Au-Yeung']
2020-06-29 08:32:16.388000+00:00
['Programming', 'Technology', 'JavaScript', 'Software Development', 'Web Development']
974
廿一世紀的職場轉變及如何應變 (Workplace Changes in the 21st Century and How to Adapt)
in Down the rabbit hole
https://medium.com/@cfadmin/%E5%BB%BF%E4%B8%80%E4%B8%96%E7%B4%80%E7%9A%84%E8%81%B7%E5%A0%B4%E8%BD%89%E8%AE%8A%E5%8F%8A%E5%A6%82%E4%BD%95%E6%87%89%E8%AE%8A-4a7d410c5abd
['Cocoon Foundation']
2020-12-18 02:08:51.393000+00:00
['21st Century Skills', 'Future Of Work', 'Technology', 'Youth', 'Entrepreneurship']
975
DSLR:5 Best Ways To Improve Creative Imagination And Creative Images
Creative Imagination For Creative Images: Creative imagination is the result of imagining beyond your ordinary imagination, and it gives birth to creative images. It lets you see, feel, and listen to things that cannot be perceived by a normal, everyday imagination or the naked eye. This is a skill you have to develop by yourself. Creative imagination is the simple process of imagining and then converting those dreams into reality, in the form of beautiful creative images, by using your imaginative power. Creative imagination is not only about imagining things, but about imagining original things and converting them into the real things that many people only think about. So how do you develop new ideas and cultivate this? You can convert your imagination into an image only with the help of a DSLR. DSLRs guide and support you to shoot images as per your imagination and make them realistic. It starts with thinking about the subject, and your imagination helps convert it into a real picture once you apply it. This is a simple process, but it takes a lot of time to make it realistic, and it can take a lot of practice, so it may feel like hard work. Another way to develop your imagination is to become more creative through regular thinking and practical work. Different surroundings can help encourage your thought process, and clicking images with the cameras below will help convert your imagination into reality. If you want more creativity in your photography, you can use the different modes in a DSLR, and with the help of your imagination you can capture a fantastic image that could be the best you have ever taken. You have to focus on daily practice, shooting different images from different points of view, to capture the best image. It will develop your photography skill and also improve your imaginative power toward the subject. Don't Wait For Creative Imagination: Creative imagination gives birth to new inventions. 1. Change Your Perspectives For Creative Images: You can become a better photographer by changing your perspectives and enjoying them without being bound by time. You can imagine the world of photography through the vision of a painter to develop your imaginative process. Click different kinds of images with the DSLR and always try to capture them from new angles. For example, a flower can be captured from various angles, and the best photographer is able to capture the best image in a single click after lots of practice. 2. Challenging Your Assumptions For Creative Images: Challenge all your assumptions for better practice, and accept the challenges for more creative photography. You can click better images with the help of the Manual settings in the camera. With the Manual setting, you can shoot the image as per your choice by making adjustments to its ISO setting and by adjusting the blurring effect. All these settings are available in the camera to make you a professional photographer. Click here for more details...
https://medium.com/@abchimankar/dslr-5-best-ways-to-improve-creative-imagination-and-creative-images-31e10e9d1755
['Technical Vision']
2021-04-25 14:58:39.180000+00:00
['Photos', 'Learning', 'Technology', 'Photography', 'Business']
976
Storage Devices: Yesterday, Today, Tomorrow.
Storage Devices: Yesterday, Today, Tomorrow. What does the future of data storage devices look like? I came across an article a few days ago that primarily focused on the new developments being made in the field of storage devices. Data is being generated at a rapid pace today, and with that, tools for storing large amounts of data are also in demand. However, this seems like a race that will be lost if no further developments are made to existing storage technology, because the rate of increase in data generation will outweigh the rate of increase in storage capacities. Credits: Stupid Friends, Pinterest Have a look at the picture above. A bulky 5 MB storage device occupying almost 4–5 rows of an airplane, costing a whopping $120,000 in 1956. A 5 MB storage device (that bulky). A high-definition image nowadays occupies more than 5 MB of space. On the right-hand side is a $60 chip-like device that can easily be placed on the tip of a thumb. And its capacity is 64 GB (almost 13,000x more than the device on the left). This reflects the progress made in this field over the last few decades. But such devices may not be able to meet the demand of today or the growing demands of tomorrow. The right question to ask now is: what is the future of storage devices? More often than not, people do not realize the developments being made in the field of technology. A new technology arrives, we find it a little hard to adapt to, and then we finally adapt to it. After some time, it becomes inseparable from us. But how often do we think about what goes into making this sophisticated piece of technology? Almost never! Some of us do not even know how a pen drive functions. Nonetheless, storage device tech is progressing fast and the developments are of an unprecedented and revolutionary nature. One such development I would like to discuss in this article is DNA data storage. The name encompasses the technology in its true sense. DNA is the hereditary material in almost every organism that stores information as a code made up of 4 chemical bases: adenine (A), guanine (G), cytosine (C), and thymine (T). Data can be stored in the sequence of these letters. Are you connecting the dots now? DNA storage is possibly going to become the new form of information technology. DNA stores genetic information about our bodies, but it is being developed by scientists to store non-genetic information as well. Instead of the traditional way of storing information in bits (that is, using the binary digits 0 and 1), DNA stores information in the form of strings using 4 potential base units (ATCG). Let us further understand DNA storage through its advantages and disadvantages. Advantages of DNA Storage In very simple words, DNA stores information about our body. Not only that, DNA stores information for a long time. It has a half-life of over 500 years. If stored in cold conditions, DNA is capable of remaining intact for hundreds of thousands of years. It is so stable that information has been retrieved from a fossil horse that lived more than 700,000 years ago. Most of our hard drives or pen drives get corrupted or damaged within a few years, and the information stored can be easily lost at any point. DNA storage eliminates this problem of hardware storage due to its stability. Comparison of Information Density The most important aspect of DNA is its storage capacity. As can be seen from the above graph, a single gram of synthetic DNA can store up to 215 petabytes of data! 
At that density, all the world's current storage needs for a year could be well met by a cube of DNA measuring about one meter on a side. Also, one unique feature of DNA is that it automatically creates copies of the data. To retrieve the stored data, the same sequencing machines are used that are used for the analysis of genomic DNA in cells. The information is then converted back to binary digits for further usage. This process can destroy the DNA as it is read — but that's where those backup copies come into play: there are many copies of each sequence. And if the backup copies get depleted, it is easy to make duplicate copies to refill the storage. The above-mentioned benefits do classify DNA storage as a paradigm-shifting technology. However, this sophisticated technology has its own set of disadvantages. Disadvantages of DNA Storage For new technologies, cost is always an issue and DNA storage is no exception. In 2012, researchers encoded 0.74 MB of data in DNA at a cost of $12,400 per MB. This means over $11.5 million per GB. This is very expensive considering the cost of storing data today using normal silicon-hardware technology is in cents per GB. However, this cost is decreasing and will further decrease in the future. In 2017, the price of encoding data to DNA had fallen to $3,500 per MB (almost 3.5x cheaper in 5 years). Also, the number of organizations involved in the development of this technology has more than doubled since 2010, reaching 411 as of 2016. The second disadvantage of DNA storage is that the speed of reading and writing data is fairly slow compared to other currently used devices. This makes it not a go-to option where data is needed quickly. Instead, it is best used to store data as archives. Conclusion DNA technology is a technology to look forward to, given the advantages it possesses and the problems it can solve. However, it has its own set of disadvantages, such as cost, handling issues, etc., that make this technology available to only a very select portion of society at present. Nonetheless, improvements are bound to happen, as investments of up to $1 billion have been poured into synthetic biology companies over the last decade. The question is, will it be available to the general public and, if so, when will it be available on an industrial scale? For more information, check out this article: The Future of DNA Data Storage Thank you for the read. I sincerely hope you found this article insightful and as always I am open to discussions and constructive feedback. Drop me a mail at: [email protected] You can find me on LinkedIn.
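The article above describes replacing the binary digits 0 and 1 with the four bases A, T, C and G. As a purely illustrative toy sketch (this mapping is my own and is not the encoding scheme the researchers actually used), here is how two bits per base could turn a short piece of text into a DNA-style sequence:

// Toy illustration only: map each 2-bit pair of a byte to one of the 4 bases.
const BASES = ["A", "C", "G", "T"]; // 00 -> A, 01 -> C, 10 -> G, 11 -> T

function textToDnaSequence(text) {
  let sequence = "";
  for (const char of text) {
    const byte = char.charCodeAt(0);
    // Walk the byte two bits at a time, most significant pair first.
    for (let shift = 6; shift >= 0; shift -= 2) {
      sequence += BASES[(byte >> shift) & 0b11];
    }
  }
  return sequence;
}

console.log(textToDnaSequence("Hi")); // "Hi" -> 2 bytes -> 8 bases: "CAGACGGC"

Each byte becomes four bases, which is where the density argument starts: the information lives in the order of the letters rather than in magnetic or electrical states.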
https://medium.com/swlh/storage-devices-yesterday-today-tomorrow-80cee19b3bf
['Ishan Choudhary']
2020-05-25 03:04:39.204000+00:00
['Information Technology', 'Database', 'Data', 'Data Science', 'Future Technology']
977
Apple’s Newest Phone Is Worth It…for Photographers
Apple’s Newest Phone Is Worth It…for Photographers I took this photo with an iPhone 12 Pro Max. Photos courtesy of the author Padding down my steps one morning, I noticed the brightly lit reflection of my casement windows on the den floor. It was before sunrise and I quickly surmised that this was sunlight streaming through my window. Approaching the windows, I could see the Waning Gibbous Moon still high in the crystal-clear sky. I grabbed my DSLR and shot a few photos of it from the vantage point of my window. My 200 mm lens lets me get quite close, but I also tried to get a shot that captured the window frame and put Earth’s satellite in context. The shots came out well and I shared them on Instagram. Less than an hour later, I glanced out at the now bright blue sky and noticed that the moon was still clearly visible. I had another idea for a photo, but this time I grabbed Apple’s new iPhone 12 Pro Max. It has a 2.5x optical zoom which is no match for my big lens, but I still thought I could do something interesting. I stepped out into the brisk autumn air, positioned myself behind one of my small backyard trees, and then tried to frame the moon behind some of the leafless branches. One of my earlier attempts I tapped on the 2.5 zoom and found that the moon was still just a tiny gray and white blob in the background. Next, I decided to push past the optical into digital zoom and noticed that I could bring in a bit more lunar landscape detail. However, to hold onto the focus and the lower exposure enough to keep the moon’s luminance from blowing out the image, I had to hold my finger on the screen, right over the moon, until Apple’s Camera app locked exposure and focus. With that done, I could frame the moon more aesthetically behind a few of the branches, which were, naturally, now blurry. I’d normally prefer a tripod at this level of zoom, but the iPhone 12 Pro Max has an extra image stabilization capability called “sensor-shift image stabilization,” which lets the image processor float on an x-y axis to reduce whatever shake you introduce while holding the phone. With that in mind, I simply put my elbows on my chest, spread my feet apart, took one deep breath, and snapped the shot. I was honestly stunned at the quality. Sure, the moon image breaks down if you zoom in on the photo, but I’ve never before captured anything quite like it on a smartphone. The results are a combination of lens quality, sensor stabilization, optical and digital zoom levels, and, of course, powerful-real-time image processing on the A14 Bionic CPU, and Smart HDR (which was turned on for this photo). If there was ever an image that, for me, represented a tipping point for smartphone photography, this is it. People often ask me if there’s a reason to buy the iPhone 12 Pro over the iPhone 12 or if they should get the 12 Pro Max over the 12 Pro. In both cases, it’s the camera technology that makes the difference. Apple’s iPhone 12 is an excellent flagship smartphone, but it lacks even a 2x optical zoom. I rely on that zoom to get me closer to objects both near (for macro shots) and far away. In the case of the iPhone 12 Pro Max, this is your case study. If you want near pro-level photography in a smartphone form factor, iPhone 12 Pro Max offers that by pushing optical zoom to 2.5x and adding the impressive sensor shift technology, which takes optical image stabilization to the next level.
https://debugger.medium.com/iphone-12-pro-max-camera-is-next-level-stuff-a0f39757d7e0
['Lance Ulanoff']
2020-12-04 23:13:09.987000+00:00
['Gadgets', 'iPhone', 'Apple', 'Technology', 'Photography']
978
Best Frontend Options for a Better Performing Website
Vanilla JS The rise of frameworks has made developers shy away from using vanilla JS. When I say vanilla JS, I do not necessarily mean JavaScript in particular. It could be TypeScript compiled to JS. What I mean is removing the framework from what gets downloaded to the customer. I mean, I hear a lot about frameworks not being heavy, which is true. They are not heavy, but the time lost is usually because the CPU has to go through all the work of parsing and compiling them on the client side, especially on these tiny little phones. Another thing we can do is defer or delay the execution of the JavaScript code on the client. Concurrent Mode in React If you thought React hooks were a big deal, concurrent mode is an even bigger deal. Concurrent mode is about baking better support for asynchronous behaviour right into the framework itself. I mean interruptible rendering, transitions, Suspense, etc. Facebook has done a lot of work in trying to figure out what perceived performance feels like to the consumer and then building the support for creating UIs that deliver it right into the framework itself. I think we are going to see other frameworks start to go that way as well. For example, Vue has been moving toward its own take on React hooks with its Composition API. Is it possible to do all that without the framework though? Like actually not have a framework on the client at all? Yes. It can be made possible with custom elements and web components. Custom Elements and Web Components Did you know that there is a component-based "framework" built right into the browser itself? That is what custom elements are. That means that you can use components on the client side without losing the milliseconds it takes to bring down your framework code. That is a huge performance boost right there because you are avoiding the compilation and parsing phase that I talked about before. A lot of sites out there are using custom elements, including YouTube. If it is good enough for Google and YouTube, it might be worth checking out, don't you think? One way to think about custom elements is that they are portable components, meaning that you can reuse them in different contexts. Like if you have a button and a carousel as part of the design system language of your site, you can reuse those in your React, Vue, and Angular code. So that is less code that you have to write in React, Vue, and Angular, and you get more consistency because it is always the same across all three. Another use for custom elements is micro frontends. Micro Frontends Think about micro frontends as breaking down your page or site into sections like header, footer, product display, etc. I do not mean little components that are put together, but big chunks of functionality. So high-level components that may work together, or share a little data, but mostly work on their own and can be versioned and deployed independently. Another thing is that you can use different languages to develop them. For example, one of your components could be in vanilla JS if it is not that complex, and another could be in React if there is a lot of specific complexity in its behaviour. The JAMstack I am not really going to write much about this here because I have done an in-depth article on what it is all about. I really recommend that you look at it. With JAMstack, you do not have to worry about servers. You do not have to worry about monitoring them. 
You will get much better latency because when you deploy them with S3, particularly on a CDN, the customers are getting access to your code right there within just a few network hops. What I am trying to say is that it all comes down to being smart with the tools that we have to get better performance. We have browsers now that can manage and handle their own components with custom elements, and we've got CDNs (content distribution networks) which can put our code out there within a customer's reach. We have micro frontends. This means that you can construct the page out of parts that are independently deployed and versioned. When you use frameworks, you are going to have expanded support for interruptible rendering, which is going to make them feel a lot more performant than they have in the past.
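To make the custom elements idea above concrete, here is a minimal sketch (the element name, its markup and the event name are invented purely for illustration) of a reusable button defined with the browser's built-in customElements API, with no framework shipped to the client:

// Define a reusable <fancy-button> element using only browser APIs.
class FancyButton extends HTMLElement {
  connectedCallback() {
    // Render the button once the element is attached to the page.
    const label = this.getAttribute("label") || "Click me";
    this.innerHTML = `<button type="button">${label}</button>`;
    this.querySelector("button").addEventListener("click", () => {
      // Emit a regular DOM event so React, Vue, Angular or plain JS pages can listen to it.
      this.dispatchEvent(new CustomEvent("fancy-click", { bubbles: true }));
    });
  }
}

customElements.define("fancy-button", FancyButton);

// Usage in any page or framework template:
// <fancy-button label="Buy now"></fancy-button>
// document.addEventListener("fancy-click", () => console.log("clicked"));

Because the element dispatches an ordinary DOM event, React, Vue, Angular or plain scripts can all listen for it in the same way, which is what makes these components portable across frameworks.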
https://medium.com/javascript-in-plain-english/best-frontend-options-for-a-better-performing-website-7590e4dde4ab
['Samuel Martins']
2020-11-06 08:28:05.311000+00:00
['Web Development', 'Software Development', 'Front End Development', 'Technology', 'SEO']
979
Water And Blockchain: Everything You Need To Know.
It’s hard to imagine a single day without running water. Well, 4 million people woke up to this living nightmare in Cape Town, South Africa last year. Officials predicted the city had 6 months of reserves left. Through collective action and some unusually lucky rainfall in the region, they narrowly avoided running out of water. This is not the end of South Africa’s water problems. Climate change and poor water management continues to contribute to water scarcity, raising tensions at political, economical and neighbourhood levels. Other countries face the same fate. Beijing, Tokyo, London, Istanbul, Barcelona and Mexico City will all see their day zero in the next few decades unless the world acts fast. But empty water reserves aren’t the only problem. In 2018, 2.1 billion people still don’t have access to safe drinking water. Yet it is so easy to take for granted. After all, more people have access to clean, running water than ever before. Most of us assume the water coming out of our taps is cheap and infinite. It is only when we look at the bigger picture that we realise just how valuable water is. The True Value Of Water There are 326 million trillion gallons of water on earth. 97% is salt water and undrinkable. 2.5% is fresh water, but most of this is trapped in the poles or deep underground. This leaves us with roughly 0.4% to share amongst 7 billion people and even more livestock (1.5 billion cows and 19 billion chickens). It also has to be shared with our entire industry and agriculture sectors in a way that supplies them with enough water to function. Climate change, ageing infrastructure and the developing world aspiring to consume like the West all drag us further into water scarcity. To make matters even worse, humans are quite famous for their inability to share resources fairly. Wars have been fought for water throughout human civilisation. The cycle of corruption and conflict continues to this day. Problem #1: Water Conflict Take Yemen. A vicious cycle of water mismanagement has fuelled a political crisis and led to a society highly distrustful of central government’s ability to control their water supply. Or how about the long standing conflict between Indian States Karnataka and Tamil Nadu. An increasingly drier climate over the last few years has reignited fighting in the area. Legal battles triggered massive protests as people fought for the water that keeps their families alive. Photo by Patrick Beznoska Jumping to the other side of the world, water was privatised by the Bolivian Government in 2000. This escalated into the “Water War of Cochabamba”. After wide-scale conflict, the city’s water was renationalised and received new legal backing. Yet due to recent water scarcity, some areas of the Bolivian countryside lost 90% of farmland during the worst drought in 25 years. The less water in a region, the more valuable it is. For many communities and families, this is a matter of life and death. But for a small handful of people, it is an opportunity for money and power. Problem #2: Water Monopolies The financial world has been watching the world’s fresh water supply very closely. Goldman Sachs predicted that water will be the petroleum of the 21st century and they have acted with vested interests accordingly. This investing powerhouse is not alone. Other powerful finance groups like JP Morgan, Barclays, HSBC and Allianz are consolidating their control over water. But water grabbing isn’t just for financial institutions. 
Wealthy tycoons, such as former US President George H. W. Bush and oil magnate T. Boone Pickens, are buying thousands of acres of land with aquifers, lakes, water rights, water utilities, and shares in water engineering and technology companies all over the world. So why is this a problem? Take the well publicised case of a citizen from Oregon, USA. Gary Harrington was put in prison for collecting rainwater on his private land. To put this into perspective, billionaire T. Boone Pickens owns more water rights than any other individual in America. He has rights to drain 200,000 acre-feet a year. But ordinary citizen Gary Harrington is arrested for collecting rainwater runoff on 170 acres of his own private land. When laws are designed for the powerful to stay powerful, inequalities like this are an inevitable consequence of centralised water management. If water is controlled or owned at a single point of power, it will always be vulnerable to corruption or mismanagement. Gary Harrington arrested for 30 days for collecting rainwater on his private land To solve these problems of unfair water distribution and monopolies, we need more transparency. Can the fourth wave of industrial revolution provide the answer? Blockchain technology plans to provide water transparency on a massive scale. Blockchain Solution #1: Water Transparency Blockchain is a secure, transparent and distributed public ledger that records transactions between parties. If a public blockchain is used for water quantity and quality data, the information can’t be hidden or changed by the corrupt behaviour of governments, corporations or powerful individuals. This has ushered in a new level of access to information and real-time approach to water management. The data on water quality and quantity can now be used to make better decisions in times of increasing water scarcity. Imagine households, industry consumers, water managers and policy makers all using this information to decide when to conserve or use water. Compared to a centralised water manager prone to corruption and vested interests, this seems like an ideal step in our evolution towards a fairer and smarter water system. Pilot studies have been launched in Hong Kong by the company WATERIG. They are developing collection points that captures rainwater. These hubs are connected to water processing systems and then directed into applications like vertical farming (literally growing food on the sides of skyscrapers) and urban greenhouse projects. Because the system is decentralised, different communities can decide what works best for them and then use the blockchain to crowd-fund their own water hubs. The WATERIG trial in Hong Kong is aiming to empower communities to be more resilient and self-reliant with their water supply. It takes the tragic case of Flint, Michigan in USA to remind us the damage that poor infrastructure and negligent government leadership can do. Scientists and doctors warned officials of high levels of lead in tap water. Sadly no action was taken until it was too late. In early 2016, Flint’s Governor requested a Presidential Declaration of a major disaster and emergency and asked for $28 million in aid. Some of this went to treating children with lead poisoning. Could a blockchain based water management system prevent a disaster like this? WATERIG’s decentralised water collection points are trying to reduce the strain on existing government water collection and treatment infrastructures. 
This is vital with urban populations skyrocketing and infrastructures deteriorating. Empowering communities and small companies to profit from rainwater inspires local economic opportunities. Rainwater usually gets wasted, but when communities have the opportunity to process their own rainwater using the blockchain, they will be more motivated to recycle it. They could even create small businesses that thrive from it. The World Economic Forum reported that rainwater could be used to cool and power factories or to create fertiliser. The community is better off and pressure is reduced on the systems that normally provide these services. To put this into perspective, a 2017 UN World Water Development Report found that 80% of rainwater doesn’t get processed. In other words, wasted opportunities. As water becomes more scarce with climate change and an increasing world population, communities will be further incentivised to take advantage of their water collection hubs. Blockchain Solution #2: Co-operation for Smarter Water Management Could water efficiency and management be improved even more by a combination of technologies? Blockchain is co-operating with the Internet of Things (IoT) to make our cities’ water systems smarter, safer and more efficient. Let’s take Mexico City’s water distribution network. Nearly half of their water is wasted in their network from leaky pipes. With an IoT based water management system, things would look very different. The city’s water distribution network would have smart sensors collecting data on water pressure and quality. Using the internet, these sensors send each other data to analyse for leaks, pipe bursts and contamination. They then rapidly send out alerts to notify water managers and modify water pressure to avoid further damage. Put simply, IoT allows complete insight, automation and control over every part of a cities water network. This same technology can be applied in your home. Smart sensors communicate with each other to identify unusual water consumption. The system can help you see your water consumption clearly, helping you change your behaviours and save money. It alerts your mobile immediately if there is a water leak when you’re not at home. It even closes the main water supply automatically, preventing further damage. Water data across cities is valuable but as with anything that is automated, we should be concerned about privacy and security. In IoT systems, all data goes to a single point of security intelligence. All the systems decisions are made here, making it vulnerable to hacking and manipulation. Just imagine the cities water automation systems in the wrong hands — it could cost millions and be a threat to national security. Blockchain removes this single point of failure in an IoT system, by enabling device networks to protect themselves in other ways. The collaboration between blockchain and IoT water network changes the way we manage water, without compromising our safety. Is This Enough? Technologies like blockchain and IoT have made us realise that our current water infrastructure is completely outdated to serve the demands of our modern world. The severity of our global water crisis has demanded innovation. Thanks to technological progress and increased awareness, we have reason to be hopeful. However, there are still huge challenges to overcome. Digitalising outdated water infrastructures, or replacing them entirely, is no easy task. 
Before these next generation infrastructures can be used by an entire city, it is essential that we future proof them from hacking and manipulation for our own safety. This requires cooperation and forward thinking from governments, communities, industry leaders and businesses. It is also vital that we become more aware of the true value of water. Technology can help us make giant leaps in efficiency and management, but if we keep treating water like an infinite resource, we will continue to see Day Zeros around the world. Our personal water usage only accounts for 8% of the world’s freshwater. Agriculture dominates by using 70% — the majority of which is used to rear livestock. The Western world is famous for its meat consumption and the developing world is following its footsteps. As water becomes scarcer, it’s a luxury we may not be able to cultivate on a global level. We must strive to become more globally conscious of how water is distributed, and make forward-thinking solutions that work alongside the next generation of technologies. More reading from this author: Climate Change, Blockchain And The Paris Agreement: A New Hope Blockchain And Energy: Everything You Need To Know. Time to start a conversation I want to hear your thoughts. Let me know what you think about using blockchain to improve our water management below.
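To make the two ideas above a little more concrete, the shared, tamper-evident record of water data and sensors that flag anomalies, here is a minimal illustrative sketch in Python. It is not WATERIG's software or any real blockchain client; the sensor names, fields and thresholds are invented purely for illustration.

```python
import hashlib
import json
import time

def _hash(block: dict) -> str:
    """Deterministic SHA-256 over a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class WaterLedger:
    """Toy append-only ledger of water readings, hash-chained like a blockchain."""

    def __init__(self):
        self.chain = [{"index": 0, "reading": None, "prev_hash": "0" * 64, "ts": time.time()}]

    def add_reading(self, sensor_id: str, pressure_bar: float, lead_ppb: float):
        prev = self.chain[-1]
        block = {
            "index": prev["index"] + 1,
            "reading": {"sensor": sensor_id, "pressure_bar": pressure_bar, "lead_ppb": lead_ppb},
            "prev_hash": _hash(prev),
            "ts": time.time(),
        }
        self.chain.append(block)

    def is_valid(self) -> bool:
        """Any edit to an earlier reading breaks the hash chain."""
        return all(self.chain[i]["prev_hash"] == _hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

def water_alerts(ledger: WaterLedger, min_pressure: float = 2.0, max_lead_ppb: float = 15.0):
    """Simple rule-based checks of the kind an IoT water network might run (thresholds invented)."""
    alerts = []
    for block in ledger.chain[1:]:
        r = block["reading"]
        if r["pressure_bar"] < min_pressure:
            alerts.append(f"Possible leak near {r['sensor']}: pressure {r['pressure_bar']} bar")
        if r["lead_ppb"] > max_lead_ppb:
            alerts.append(f"Contamination risk at {r['sensor']}: lead {r['lead_ppb']} ppb")
    return alerts

if __name__ == "__main__":
    ledger = WaterLedger()
    ledger.add_reading("district-7", pressure_bar=3.1, lead_ppb=2.0)
    ledger.add_reading("district-7", pressure_bar=1.2, lead_ppb=22.0)  # suspicious reading
    print(water_alerts(ledger))
    print("ledger intact:", ledger.is_valid())
    ledger.chain[1]["reading"]["lead_ppb"] = 1.0   # attempt to hide contamination after the fact
    print("ledger intact after tampering:", ledger.is_valid())
```

The point of the sketch is the last two lines: once a reading is published, quietly rewriting it breaks the chain, which is the transparency property the article argues a Flint-style cover-up would run into.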
https://medium.com/hackernoon/blockchain-and-water-everything-you-need-to-know-b7e753108715
['Oliver Russell']
2018-10-31 17:21:54.340000+00:00
['Blockchain', 'Blockchain Technology', 'Water', 'Environment', 'Innovation']
980
nwChats: What is a hackathon?
“A hackathon is a 24 to 36 hour invention marathon. We get a whole bunch of people together in a room and we tell them, “build whatever you want”. Over the course of the next day or two, they build amazing inventions, things like self-driving cars for under $100, or an app that can tell you if you have a concussion and how bad it is.” — Nick Quinlan, Major League Hacking (New York, USA) “A hackathon is basically an event where a bunch of people, usually students, come together to build really cool software and hardware projects from the ground up in a limited amount of time. It’s really high adrenaline, a lot of fun, and you get to meet a community of people that is really passionate about the same thing as you.” — Kristine Clarin, Hack the North (Ontario, Canada) “A hackathon is basically a very hype, one-weekend event where you can just go and innovate and create applications or prototypes of different products. Often, it’s a very, very hype environment where many people strive for the best during that weekend.” — Juraj Mičko, HackKosice (Košice, Slovakia) How do hackathons actually… work?
https://medium.com/nwplusubc/nwchats-what-is-a-hackathon-7b5032011487
['Rebecca Xie']
2020-12-03 21:06:01.137000+00:00
['Technology', 'Beginners Guide', 'Hackathons', 'Programming', 'Community']
981
Scale-up Spotlight: A conversation with Co-founder and CEO of Sendcloud, Rob van den Heuvel
Scale-up Spotlight: A conversation with Co-founder and CEO of Sendcloud, Rob van den Heuvel We spoke with the CEO and Co-founder of Sendcloud, Rob van den Heuvel, about the growth of his company and… Top Business Tech Dec 22, 2021·3 min read We spoke with the CEO and Co-founder of Sendcloud, Rob van den Heuvel, about the growth of his company and how it has adapted to the post-Covid-19 world. Sendcloud is an all-in-one shipping platform for e-commerce businesses that want to become international giants. Rob van den Heuvel came up with the idea for Sendcloud back in 2012 when he and his partners Bas Smeulders (Co-founder and COO) and Sabi Tolou (Co-founder and CCO) all ran an online store in phone accessories. “Things were going great, and our businesses were growing,” he explained, “but we were struggling with shipping as it was both time-consuming and expensive. After some beers, we decided to come up with a solution ourselves and asked a friend to help us build the software. Sendcloud was born, and the rest is history!” Sendcloud started in the Netherlands but has quickly become one of the fastest growing scale-ups in Europe, with more than 23,000 customers across the UK, France, Germany, Spain, Italy, Belgium, and Austria. Its current customers range from small to enterprise-sized online retailers in industries ranging from fashion and electronics to food and drink. Sendcloud differentiates itself in the market by being customer- and solution-centric, saying: “technology is at the heart of everything we do here”. Its end-to-end product covers the entire process, putting everything they need in one platform. In comparison, most retailers typically use three or four separate tools for shipping, labels, order picking, and returns. Through its carrier and partner network, Sendcloud positions itself as one of the European leaders in enabling retailers to grow internationally. It’s also fully scalable, catering to businesses of all sizes, whether they ship 40 parcels a month or 400,000. Covid-19 has massively affected businesses in the last two years, and e-commerce gained rapid momentum because of the lockdown measures as high street stores closed and people flocked to online shopping in huge numbers. When asked about Sendcloud’s last milestone as a company, van den Heuvel said: “it has to be rapidly expanding our team to keep up with this growth. We started with 140 employees in 2020 and recently reached the milestone of 300 employees” reiterating our status as one of the fastest growing companies in Europe. Where other companies have had to slow their work down to adapt to the new changes, Sendcloud has found itself in a state of flux. This meant hiring people even faster than planned and managing all of this remotely. The biggest challenge for Sendcloud according to van der Heuvel has been scaling up the business even faster than originally expected whilst making sure new members are onboarded properly and staff remain motivated whilst working remotely. E-commerce has grown tremendously during this past year, but now that shops have reopened and consumers have returned to the high street, this growth will naturally slow down. However, it can still be expected that many consumers will opt for online shopping in the long term. Lockdown has really caused an acceleration of an ongoing trend that ensures their safety and gives them peace of mind in a dangerous world. 
In the post-Covid world, many consumers may be craving the physical shopping experience and excitement that you simply can’t provide online. However, consumers have realized the full range of benefits of online shopping, from the peace of mind to deliveries built around their schedule, and have made it part of their routine. Covid-19 has set an irreversible trend when it comes to online shopping, and an omnichannel approach to shopping is here to stay as consumers get used to this new shopping environment.
https://medium.com/tbtech-news/scale-up-spotlight-a-conversation-with-co-founder-and-ceo-of-sendcloud-rob-van-den-heuvel-ef41443afff3
['Top Business Tech']
2021-12-22 12:14:14.639000+00:00
['It', 'Cybersecurity', 'Technology', 'Business', 'Scaleup Spotlight']
982
As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time
As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time Ram R vs Mies A Live Tv Nov 19, 2020·6 min read Life is a journey of twists and turns, peaks and valleys, mountains to climb and oceans to explore. Good times and bad times. Happy times and sad times. But always, life is a movement forward. No matter where you are on the journey, in some way, you are continuing on — and that’s what makes it so magnificent. One day, you’re questioning what on earth will ever make you feel happy and fulfilled. And the next, you’re perfectly in flow, writing the most important book of your entire career. What nobody ever tells you, though, when you are a wide-eyed child, are all the little things that come along with “growing up.” 1. Most people are scared of using their imagination. They’ve disconnected with their inner child. They don’t feel they are “creative.” They like things “just the way they are.” 2. Your dream doesn’t really matter to anyone else. Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you. 3. Friends are relative to where you are in your life. Most friends only stay for a period of time — usually in reference to your current interest. But when you move on, or your priorities change, so too do the majority of your friends. 4. Your potential increases with age. As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time to soak up more knowledge. Being great at something is a daily habit. You aren’t just “born” that way. 5. Spontaneity is the sister of creativity. If all you do is follow the exact same routine every day, you will never leave yourself open to moments of sudden discovery. Do you remember how spontaneous you were as a child? Anything could happen, at any moment! 6. You forget the value of “touch” later on. When was the last time you played in the rain? When was the last time you sat on a sidewalk and looked closely at the cracks, the rocks, the dirt, the one weed growing between the concrete and the grass nearby. Do that again. You will feel so connected to the playfulness of life. 7. Most people don’t do what they love. It’s true. The “masses” are not the ones who live the lives they dreamed of living. And the reason is because they didn’t fight hard enough. They didn’t make it happen for themselves. And the older you get, and the more you look around, the easier it becomes to believe that you’ll end up the same. Don’t fall for the trap. 8. Many stop reading after college. Ask anyone you know the last good book they read, and I’ll bet most of them respond with, “Wow, I haven’t read a book in a long time.” 9. People talk more than they listen. 
There is nothing more ridiculous to me than hearing two people talk “at” each other, neither one listening, but waiting for the other person to stop talking so they can start up again. 10. Creativity takes practice. It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable. If you want to keep your creative muscle pumped and active, you have to practice it on your own. 11. “Success” is a relative term. As kids, we’re taught to “reach for success.” What does that really mean? Success to one person could mean the opposite for someone else. Define your own Success. 12. You can’t change your parents. A sad and difficult truth to face as you get older: You can’t change your parents. They are who they are. Whether they approve of what you do or not, at some point, no longer matters. Love them for bringing you into this world, and leave the rest at the door. 13. The only person you have to face in the morning is yourself. When you’re younger, it feels like you have to please the entire world. You don’t. Do what makes you happy, and create the life you want to live for yourself. You’ll see someone you truly love staring back at you every morning if you can do that. 14. Nothing feels as good as something you do from the heart. No amount of money or achievement or external validation will ever take the place of what you do out of pure love. Follow your heart, and the rest will follow. 15. Your potential is directly correlated to how well you know yourself. Those who know themselves and maximize their strengths are the ones who go where they want to go. Those who don’t know themselves, and avoid the hard work of looking inward, live life by default. They lack the ability to create for themselves their own future. 16. Everyone who doubts you will always come back around. That kid who used to bully you will come asking for a job. The girl who didn’t want to date you will call you back once she sees where you’re headed. It always happens that way. Just focus on you, stay true to what you believe in, and all the doubters will eventually come asking for help. 17. You are a reflection of the 5 people you spend the most time with. Nobody creates themselves, by themselves. We are all mirror images, sculpted through the reflections we see in other people. This isn’t a game you play by yourself. Work to be surrounded by those you wish to be like, and in time, you too will carry the very things you admire in them. 18. Beliefs are relative to what you pursue. Wherever you are in life, and based on who is around you, and based on your current aspirations, those are the things that shape your beliefs. Nobody explains, though, that “beliefs” then are not “fixed.” There is no “right and wrong.” It is all relative. Find what works for you. 19. Anything can be a vice. Be wary. Again, there is no “right” and “wrong” as you get older. A coping mechanism to one could be a way to relax on a Sunday to another. Just remain aware of your habits and how you spend your time, and what habits start to increase in frequency — and then question where they are coming from in you and why you feel compelled to repeat them. Never mistakes, always lessons. As I said, know yourself. 20. Your purpose is to be YOU. What is the meaning of life? To be you, all of you, always, in everything you do — whatever that means to you. You are your own creator. You are your own evolving masterpiece. 
Growing up is the realization that you are both the sculpture and the sculptor, the painter and the portrait. Paint yourself however you wish.
https://medium.com/@shitolliveonn/as-people-get-older-they-tend-to-think-that-they-can-do-less-and-less-when-in-reality-they-e16ede251e34
['Ram R Vs Mies A Live Tv']
2020-11-19 17:55:00.156000+00:00
['Technology', 'Sports', 'Social Media', 'News', 'Live Streaming']
983
Blink Camera Not Recording Clips
Blink Camera Not Working | Blink Camera Not Recording Clips | +1–800–983–7116 Lieke Follow Jul 5 · 3 min read If your Blink Camera Not Working properly then, check the internet connection, placement of Blink Security Camera, and the motion detection settings. There are many reasons why you should trust the Blink security camera. It includes the sleek design, zero monthly fees, data storage, Alexa integration, etc. But, sometimes you may face one or other kinds of Blink security camera-related problems which include, Blink Camera Connected but Not Working, Blink Camera Not Recording, How to Reset Blink Camera, Blink Camera Not Detecting Motion, Blink Camera Not Working Red Light or Blink Camera Offline. In this guide, we are going to discuss the troubleshooting methods for one of those issues. Before the end of this guide, your issue will be resolved and the security camera will start working. Why Is My Blink Camera Not Working? A number of factors could be responsible if your Blink Camera Not Working properly. It includes the weak internet connection, loose cable connections, and distance between the base station and camera. In addition to it, don’t forget to check the motion detection settings of your security camera. How to Fix When Blink Camera Not Recording? We are going to tell you some of the easy and effective troubleshooting methods. If you want an immediate solution then dial the given toll-free Blink Camera Customer Service Phone Number. Check the Wifi Connection It could be possible that your wifi has stopped working. This is the main reason why your Blink Camera Not Recording Clips. Hence, contact your network service providers or else try to resolve the issue by restarting the router. 2. Restart the Router Many times, the technical glitches could be easily resolved just by a quick restarting method. To restart the router, turn OFF the button and remove the power plug from the wall outlet. Wait for a while and then reinsert the power plug again. Turn ON the router and check if the issue gets resolved or not. 3. Verify the Cable Connections It could be possible that you have not connected the cables properly. This is why your Blink Camera Not Working properly. So, verify all the cable connections and replace the ones that are damaged. 4. Move Base Station Close to the Router You may have placed the base station far from the router. This is the main reason why your Blink Camera Offline and not working properly. Hence, minimize the distance between the security camera and the base station. 5. Restart the Blink Security Camera To restart the Blink security camera, remove the power plug from the wall outlet and wait for 2–3 minutes. Now, reinsert the power plug again and turn ON the security camera. Check now if the issue gets resolved or still the Blink Camera Not Working properly. Steps To Disable the Motion Detection of Security Camera It could be possible that your Blink Camera Not Detecting Motion or someone has turned it off from your Blink security app. Don’t worry, we have already arranged the steps that will help you to set the motion detection. Open the Blink camera application on your smartphone. The app can be downloaded from the play store or app store. If you are a new user then create the account from the official Blink security website. Once the account will be created successfully, open the camera settings from the app. The slider that is given below the Motion Detection should be turned ON. You need to arm your systems when the motion detection button is enabled. 
You can now check if the issue gets resolved or still, Blink Camera Connected but Not Working. Conclusion If your Blink Camera Not Working properly then, you must check the internet connection, motion detection settings, and placement of the base station. In this guide, we have mentioned all the steps to resolve the Blink camera-related issues.
https://medium.com/it-helpdesk-tips-tricks/blink-camera-not-working-blink-camera-not-recording-clips-blink-camera-offline-8e1452d8e1b1
[]
2021-07-05 06:45:42.875000+00:00
['Security Camera', 'Security', 'Services', 'Technology', 'Tech']
984
Personal Capital Mobile App Review
What is Personal Capital? Personal Capital is an online tool, available from your desktop or phone, that will help you: Monitor all of your financial accounts in real-time — whether checking account, certificate of deposit, or retirement account Get objective investment advice designed to make you — not the advisor — money Provide investment options that are tailored to your goals How Personal Capital Works Personal Capital offers a free version and a premium version that features direct investment management. Whichever version you use, your account is actually held by Pershing Advisor Solutions, who acts as trustee for your account. I can’t speak personally to the premium account, but the standard free account allows users to seamlessly integrate all their investment data in a single page. Although free users are offered investment advice with the occasional pop-up notice or email, the platform itself does not have any annoying third party ads. The Free Version With the free version, you get full use of the Personal Capital platform as well as a free consultation from a financial advisor (Click Here to download the app and get $20 for using the free app!). That advisor will give you a personalized analysis of your investments and recommendations as to what you can do with your portfolio. Your financial advisor can be contacted by phone, email, or by online chat. In fact, the only feature that differentiates the free version from Personal Capital’s premium product is their personalized portfolio management. Other than that, the free version includes all of the many features and benefits that are available on the platform, including the ability to aggregate all of your financial accounts, the 401(k) analyzer, objective investment advice and investment check-ups, a real-time financial dashboard, and access via the mobile app. The Premium Version Also known as their Wealth Management program, Personal Capital’s premium program includes active management of your investment portfolio. Like other similar products, they first determine your risk tolerance, personal preferences, and investment goals. Using that evaluation, they then create a portfolio tailored to fit within those parameters. The fee structure for this service is as follows: 0.89% of the first $1 million 0.79% of the first $3 million 0.69% of the next $2 million 0.59% of the next $5 million 0.49% on balances over $10 million These fees are quite reasonable when compared with fees of 1% to 2% that are customarily charged by active investment management services. The fees apply only to the assets you have under management at Personal Capital, and not to other investments that may be aggregated on the site, such as your 401(k) plan. And, that’s it. There are no additional fees. Personal Capital does not charge trading, commission, administrative, or any other types of investment fees. Your only cost is the annual, all-inclusive percentage that applies to your portfolio level. Investment Strategy. Personal Capital uses Modern Portfolio Theory (MPT), to manage your portfolio. MPT focuses less on individual security selection, and more on diversification across broad asset classes. 
Those asset classes include: US stocks (which can include individual stocks) US bonds International stocks International bonds Alternative investments (including ETFs and commodities) Cash Though Personal Capital makes use of funds in constructing your portfolio, they may also include up to 100 individual securities in order to avoid being too heavily concentrated in a small number of companies. Personal Capital uses an integrated investment approach to managing your investments, which is a pretty unique feature. This means that they factor in all of your investment holdings — including those not managed by Personal Capital — in managing your portfolio. For example, though they don’t manage your 401(k) account, your 401(k) allocations will be considered when making decisions about your investments that are actually managed by Personal Capital. Personal Capital Tools and Benefits There are many benefits to using Personal Capital to streamline your financial accounts, including: The Investment Checkup This tool analyzes your investment portfolio and gives a risk assessment of it, to make sure that your level of risk is consistent with your goals. This will help you to create an asset allocation that will get you where you need to go with your investments. Retirement Planner You can often find retirement planners or retirement calculators on various sites throughout the Internet. But what better place than to have it available where you also have all of your investment accounts listed? Personal Capital’s Retirement Planner allows you to run numbers on your retirement to make sure that you will be prepared when the time comes. It allows you to incorporate major changes in your life into your retirement planning, such as the birth of a child or saving for college. Net Worth Calculator Since Personal Capital aggregates all of your financial accounts on the same platform, they can also provide you with ongoing monitoring of your net worth. This will enable you to get the most comprehensive view of your financial situation since it not only takes into account your assets but also your debts. Net worth is the best single indicator of your overall financial strength, and this will give you an opportunity to track it. Cash Flow Analyzer Though our focus in this article has been primarily on the investment side of Personal Capital, it’s important to recognize that it also includes a budgeting capability. The Cash Flow Analyzer tracks your income and expenses from all sources, letting you know where you’re spending money (or spending too much of it), which will help you to make adjustments that will improve your overall budget. Mobile App
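To illustrate how the tiered fee schedule quoted earlier plays out on a real portfolio, here is a rough sketch. The exact tier boundaries are an interpretation of the published wording (for example, reading "0.79% of the first $3 million" as applying once a balance exceeds the $1 million level), so treat the numbers as illustrative rather than as Personal Capital's official calculation.

```python
def annual_advisory_fee(balance: float) -> float:
    """
    Rough sketch of the tiered advisory fee described in the article.
    Interpretation (an assumption, not an official calculation):
      - balances up to $1M pay a flat 0.89%
      - larger balances pay 0.79% on the first $3M, 0.69% on the next $2M,
        0.59% on the next $5M, and 0.49% on anything above $10M
    """
    if balance <= 1_000_000:
        return balance * 0.0089

    tiers = [                    # (tier size in dollars, rate)
        (3_000_000, 0.0079),
        (2_000_000, 0.0069),
        (5_000_000, 0.0059),
        (float("inf"), 0.0049),
    ]
    fee, remaining = 0.0, balance
    for size, rate in tiers:
        slice_ = min(remaining, size)
        fee += slice_ * rate
        remaining -= slice_
        if remaining <= 0:
            break
    return fee

if __name__ == "__main__":
    for b in (500_000, 2_000_000, 12_000_000):
        fee = annual_advisory_fee(b)
        print(f"${b:,.0f} portfolio -> ${fee:,.0f}/yr ({fee / b:.2%})")
```

On this reading, a $12 million portfolio would pay a blended rate of roughly 0.64%, which is consistent with the article's point that the all-in fee sits well below the 1% to 2% typical of active managers.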
https://medium.com/escaping-the-9-to-5/personal-capital-mobile-app-review-7490d31a55ea
['Casey Botticello']
2020-05-06 02:30:34.185000+00:00
['Investing', 'Business', 'Venture Capital', 'Entrepreneurship', 'Technology']
985
Part 2: Lightning Network, Bitcoin’s Crossing the Chasm Superpower
Before diving into this post, if you haven’t already, you should read Part 1: The Dual Adoption Curves of Bitcoin. That post explains how Bitcoin adoption can be understood by exploring its dual adoption curves in the context of the Diffusion of Innovations theory. In this post, I’ll explain the concept of ‘Crossing the Chasm’ and hypothesize that the Lightning Network is Bitcoin’s Crossing the Chasm Superpower. Crossing the Chasm In the early 1990s, Geoffrey Moore studied the relationship between the burgeoning tech startup ecosystem and the Diffusion of Innovations theory. In his book, Crossing the Chasm, Moore posited a minor, yet enormously consequential, tweak to the Diffusion of Innovations theory. He hypothesized that between the early adopters and the early majority existed a massive gap, the chasm, which any technology needs to traverse to go mainstream. Otherwise, if a product could not cross the chasm, it would wither in the doldrums of its niche market and never reach full, mainstream adoption. The Chasm exists because of the major psychological and social differences between early adopters and the early majority. Early adopters are visionaries searching for revolutionary change. They see a problem with the status quo and they actively want to adopt technologies to fix that. They are willing to take big risks to drive an order of magnitude change. On the other hand, the early majority are risk averse pragmatists who are focused on the day to day problems that exist within their specific vertical. They are searching for incremental, reliable solutions from established players. The early majority is not at all influenced by early adopters. Hence, the chasm… Moore spends the rest of the book outlining the key strategies necessary to effectively cross the chasm. To briefly summarize, the key to crossing the chasm is relentless focus on one specific market segment, a beachhead. Moore says, “Target a specific market niche as your point of attack and focus all your resources on achieving the dominant leadership position in that segment.” Within that target market, you need to build out a “whole product.” A whole product includes the core product, but also, whatever you need to achieve your compelling reason to buy. This can include additional software, hardware, systems integration, installation and debugging, training and support, standards and procedures, etc. Early majority customers expect a product to just work for their use case. Once a dominant market position is achieved in this beachhead segment, the product can expand strategically into adjacent markets using the reputational gains from the initial segment to convince other members of the early majority. Airbnb is one of the best examples of successfully crossing the chasm. Originally, Airbnb started as a service where people booked travel accommodations during only the busiest moments in a specific city (i.e. during a major conference). But, it has since grown to become one of the most popular ways to book any accommodation (travel or otherwise). Airbnb accomplished this feat, in part, by focusing on delivering a “whole product” for its early hosts. For example, initially, growth was slow in New York. So, the founders went there to identify the problem. Although the accommodations were up to standards, the photos of the accommodations were terrible. So, they hired professional photographers to take pictures of the accommodations. This decision kick-started growth for the platform. 
Airbnb didn’t start off in the photography business, obviously, but in order to solve the needs of its initial early majority users, it created a “whole product” that incorporated a photography service. With decisions like this, Airbnb successfully crossed the chasm into the mainstream. Bitcoin’s Crossing the Chasm Hurdle As discussed in Part 1, for Bitcoin, overall, the initial adoption has been driven mostly by Bitcoin, the asset. Given the current point on the overall adoption curve along with the projected rate of adoption, it is clear that over the next 10 years, Bitcoin will start to move past those early adopters towards the early majority. To do so, Bitcoin will have to cross the chasm. The early majority, as risk-averse pragmatists focused on the day-to-day problems within their specific verticals, might struggle to grasp the narrative around Bitcoin as purely an investment asset. The early majority, when perceiving Bitcoin as exclusively an asset, will not comprehend how it can directly help them solve their day to day, vertical specific problems. So, in all likelihood, in order for Bitcoin to effectively cross the chasm, reach the early majority, and go mainstream, Bitcoin’s positioning will need to evolve. So, the pertinent question is: how will Bitcoin cross the chasm to reach the early majority? To answer that question, let’s circle back to Moore who says the key to crossing the chasm is relentless focus on one specific market segment, a beachhead. Focus all resources on building a whole product for that specific segment. This strategy to cross the chasm does not fit neatly with most of the existing or previous narratives around Bitcoin. Those narratives encompass general use cases for broad swaths of the population. For instance, the narrative of a cheap payments network is exciting and interesting. Any business could use that, right? Well, given its broad scope, any business could use it. But, given the lack of a whole product incrementally solving a problem in a specific niche that just works, no business will use it. While early adopters don’t mind general narratives and half-baked use cases, the early majority will only adopt if there are specific use cases that incrementally solve their existing problems. Bitcoin’s Crossing the Chasm Superpower For Bitcoin to cross the chasm into the early majority, it will need to meet the unique needs of specific niche audiences who are focused on day to day problems within their business. These audiences want solutions that make their lives significantly easier and give them more time to focus on their core value propositions. If they’re pitched Bitcoin exclusively as an asset, based on their needs, they might not be compelled enough to explore and adopt. But, these early majority users could be very intrigued by the potential solutions enabled by Bitcoin, the network, or a better form of money. Over the past few years, Bitcoin has developed a superpower in this race to cross the chasm. The Lightning Network will help transition Bitcoin from the early adopters who mostly use Bitcoin, the asset, to the early majority who will use Bitcoin, the network. Before diving into that statement, let’s quickly zoom out and define the Lightning Network in the simplest terms possible. The Lightning Network is a software layer on top of Bitcoin that allows for instant, low fee, and high volume transactions. The network is based on a technology called payment channels. 
Payment channels allow for any two Bitcoin users to open up a channel with a defined amount of Bitcoin between themselves. Within a payment channel, these two users can exchange instant, nearly free payments with each other as long as the amount of Bitcoin in the channel meets their bi-directional payment needs. Lightning Network takes this technology and builds a network of payment channels in a fully decentralized, permissionless way. That network of payment channels allows for anyone on the network to pay someone else on the network in an instant, cheap way even if they don’t have a direct channel between them. For example, let’s say Alice wants to pay Bob. But, Alice doesn’t have a channel with Bob. However, she does have a channel with Carol who happens to have a channel with Bob. Alice can route the payment to Bob through Carol. And, because of cryptographic proof and the smart contracts inherent in the design of the Lightning Network, this transaction can occur without Alice needing to trust Carol. It’s also important to highlight that the Lightning Network is a protocol layer built on top of the Bitcoin protocol. The Bitcoin protocol, powered by adoption of the early adopters, has become an almost trillion dollar, decentralized, global, always-on monetary network. So, the Lightning Network provides the ability to instantly, cheaply, and premissionlessly transact with Bitcoin, a nearly trillion dollar global monetary asset. And, the Lightning Network is completely open source. Anyone can contribute to it and anyone can build with it. Finally, as Lyn Alden discussed in her piece on Bitcoin network effects, the Lightning Network may have even stronger network effects than the Bitcoin base layer. The main constraint for the Lightning Network is liquidity. Namely, liquidity in the right places with a sufficient number of unique channels between unique nodes. As more and more liquidity is added and more channels are created and maintained, this will allow for more adoption, which kicks off a virtuous cycle. So, Lightning Network is Bitcoin’s crossing the chasm superpower. It gives developers the ability to build on a stack that provides instant, cheap, and permissionless movement of value anywhere in the world at any time. With the Lightning Network and motivated entrepreneurs, Bitcoin, the network, unlocks the power of human ingenuity and optionality in its race to cross the chasm. It can harness the power of entrepreneurs to build compelling uses for niche audience segments leveraging a programmable, global and internet native money. These entrepreneurs will be able to use the open source protocol to solve their payment routing and cost needs. It provides Payment Infrastructure as a Service in the same way AWS provides Cloud/Hosting Infrastructure as a Service. So, instead of wasting valuable bandwidth solving a difficult general problem, entrepreneurs can focus on building useful tools for their specific audiences. With the Lightning Network, Bitcoin will get an almost infinite number of ‘shots on goal’ to build compelling products and use cases for the early majority. The best historical example to demonstrate the power of human ingenuity and optionality in helping to cross the chasm comes from smartphones. In January 2005, RIM (Blackberry) reached 2 million subscribers marking a major milestone for smartphone adoption. However, it was only the very beginning of the adoption curve. RIM subscribers were the innovators. 
They were high-powered business people who needed to look at documents and send emails on the go. Then, the first iPhone launched in 2007 with the first Android phone coming a year later. The early iPhone and Android users were the early adopters. They saw the value in having the ability to listen to music, make calls, watch videos and access the internet over their mobile devices. But, smartphones didn’t actually reach the early majority of users until mid 2010. In order to get there, smartphones needed the launch of the App/Play Store in 2008/09. With that launch, developers were able to build on a stack that gave them a touch screen interface, internet connection, location data, and much more. Given that resource, developers built countless remarkable experiences that users wanted and, eventually, could not live without. The app ecosystem drove smartphones across the chasm into the early majority. The early majority didn’t need a smartphone to send emails or listen to music like the innovators or early adopters. But, they did need a way to get home from the bar (Uber). They did need directions to the new sushi restaurant to meet their friends (Maps). They did need a way to easily chat with friends who may not live in the same country (WhatsApp). Potential Lightning Use Cases So, what will early majority users need from Bitcoin, the network, an instant, cheap, global, always-on monetary network? Within just the past couple of years of development on Lightning, we’re starting to see glimpses of potential early majority use cases and solutions that might be just around the corner. I will outline just a few of those below. There are many more out there right now. And, even more to come that haven’t even been conceived of yet. International Remittances When someone from Country X wants to send money to a family member in Country Y, this is called an international remittance. International remittance payments cost, on average, 6.5% of the amount sent, with lower amounts actually requiring a higher percentage fee. Those prices can be even higher in countries with less formal financial infrastructure, which are often the places that most need remittances due to inflationary pressures and stagnant economies. Even at that cost, international remittances can take anywhere from days to a week to process. Strike On Lightning Network, Strike has built a product that allows users to send money instantly, with no fees, anywhere in the world. So, you can take dollars from your account in the United States and send that directly to a user in the UK who receives that money in pounds instantly with no fees. This transaction is possible because the USD is instantly converted to BTC sent to the UK and instantly converted back to GBP. With this product, Strike immediately solves a real problem for millions of users. That’s a fact, not a hypothesis. In the past two months, Strike launched in El Salvador, the sixth highest ranked country in inbound remittance from the United States. 24% of the nation’s entire GDP is remittance based. After Strike’s launch, it became the #1 most popular app in the country after only 3 weeks. These users are using Bitcoin + Lightning, even if they don’t know it. Card Rewards Credit card companies use points as a user acquisition mechanism with the promise that the more you spend, the more reward points you’ll receive, which can be used for travel, experiences, and more. 
However, the industry’s dirty little secret is that ‘your’ points are at risk of inflation, devaluation, and, even, cancellation. Enter Fold. Fold allows for users to earn Bitcoin on all of their spending. Rather than accumulating transitory credit card points, users are able to stack hard money with every single purchase. Just in the past few months, Fold has processed over $100M worth of transactions and distributed billions of satoshis to its users. And, in the future, they will launch a Bitcoin Rewards API so that other card companies can create credit cards which allow for users to receive Bitcoin back rather than points. Fold, Earn Bitcoin on your spending Creator Economy: Value for Value In the tech and venture capital world, the creator economy has been a hot sector for years. Recently, Li Jin of Atelier Ventures, a prominent thought leader in this space, posted the following, which suggests that platforms impose an egregious tax on their most valuable users, those that create the content. Platforms are able to impose these take rates because of their monopoly on user attention. That monopoly on user attention is driven in part by the creators of the content, but also, by the user needs around content discovery and the user unwillingness to pay for broad access to content. As creators grow their followings on these platforms, they are increasingly finding better ways to monetize those users off the platform through tools like Patreon or Substack. So, despite not being willing to pay for the initial discovery platform, once fans find their creators, they are willing to pay for content from that specific creator. That said, price discovery is still ongoing with regard to what fans are willing to pay creators for each type of content. Sphinx Chat Streaming Payments On Lightning Network, Sphinx.chat built a product which allows for creators to receive micropayments when they chat with or podcast for their fans. For podcasts, podcasters can set the cost of listening to each episode at a certain number of sats per minute. Additionally, users can tip additional sats while they listen. By creating this product with Bitcoin and Lightning, the payments can be extremely small (micropayments aren’t possible in the legacy financial system). So, users can tip as granularly as they want and hosts can set a very low cost per listen. And, creators can grow their paid following beyond their own borders as they don’t have to manage currency conversion. As the size of the audience builds, the streaming sats paywall along with tipping could actually drive meaningful revenue for creators. Users and creators could build a value-for-value model where users pay directly for the content itself. If this model gets traction amongst creators, it could start to crack the existing internet content model around monopolizing user attention and relying on ads to deliver value back to content creators. Gaming + Metaverse Since the invention of digital gaming, games have tried to mimic the use of money by creating unique tokens that can be earned in the game and exchanged for in-game products. Roblox is one of the most well known examples of such an experience. Games created these tokens as a way to reward users and encourage them to purchase status in the game (outfits, tools, weapons, etc.). In this way, countless games have built up these walled garden economies. 
Even though it would potentially drive more engagement, games are not able to incorporate real money into their experience due to the costs associated with transferring that money between players and from the game to the players. On Lightning Network, Zebedee has built a platform which allows for game developers to incorporate real Bitcoin payments into their experience. Instead of points with no value outside of the walled garden of their games, game developers can easily incorporate Bitcoin payments. This platform allows for developers to add real skin in their game, which can make the experience much more engaging for end users. Earning Sats Stakwork is a Lightning based platform for permissionless microtasks. Through the platform, anyone, anywhere can accomplish small tasks and get paid in Bitcoin. In the last 6 months, workers on Stakwork have been paid for 3M tasks, totaling 171,000 Lightning payments. They have 20,000 workers onboarded in places like Argentina, Nigeria, Ghana, Turkey, and the Philippines with a waiting list about as long. Real Time VPN Impervious built a dynamic Lightning based VPN, currently in closed beta. Impervious generates cryptographically secure tunnels that ensure data remains private both at rest and during transit, while also shielding the source of data transmissions. Bitcoin + Lightning Network = A Bet on Human Ingenuity When the App Store launched, no one immediately knew that Uber, Doordash, SnapChat and Instagram were going to make smartphones irreplaceable. No one knew which specific experiences would resonate with end users in the early majority. But, Apple and Android bet on the power of human ingenuity to figure out compelling use cases on top of this new, powerful tech stack. We find ourselves in a similar position with Bitcoin today. Bitcoin, the asset, has kickstarted the adoption curve of Bitcoin overall. And thus, we’re rapidly approaching the crossing the chasm moment. Bitcoin, the network, will help Bitcoin evolve to solve the specific problems of the early majority. The Lightning Network gives Bitcoin the superpower of relying on human ingenuity and optionality to do that. It lets developers and entrepreneurs build on top of a global, always-on, permissionless nearly trillion dollar network to create compelling products for their niches. What will those hit products be? No one knows. But, I’m betting on Bitcoin + Lightning Network. Because, at this point, that’s just a bet on human ingenuity.
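As a way to picture the Alice, Carol and Bob routing described earlier, here is a minimal sketch of path-finding over a toy channel graph. It is purely illustrative: real Lightning implementations use onion routing, HTLCs, fee and timelock negotiation, and per-direction channel balances, none of which are modelled here, and the channel capacities and fee rates below are made up.

```python
from collections import deque

# Toy channel graph: node -> {neighbour: spendable capacity in sats}.
# Topology and capacities are invented for illustration only.
channels = {
    "alice": {"carol": 50_000},
    "carol": {"alice": 10_000, "bob": 80_000},
    "bob":   {"carol": 5_000},
}

def find_route(src: str, dst: str, amount_sats: int):
    """Breadth-first search for any path whose channels can carry the amount.
    Real nodes also weigh fees, timelocks and reliability when picking routes."""
    queue = deque([[src]])
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for neighbour, capacity in channels.get(node, {}).items():
            if neighbour not in path and capacity >= amount_sats:
                queue.append(path + [neighbour])
    return None

def routing_fee(path, amount_sats, base_fee_sat=1, fee_rate_ppm=100):
    """Illustrative fee: each intermediary charges a base fee plus a parts-per-million rate."""
    hops = max(len(path) - 2, 0)  # intermediaries only
    return hops * (base_fee_sat + amount_sats * fee_rate_ppm // 1_000_000)

if __name__ == "__main__":
    amount = 40_000  # sats
    route = find_route("alice", "bob", amount)
    if route:
        print(" -> ".join(route), f"| fee ~ {routing_fee(route, amount)} sats")
    else:
        print("No route with enough liquidity: the constraint the article highlights.")
```

Running it finds the alice -> carol -> bob route for a few sats of routing fee; bump the amount above the capacity of any channel on the path and no route is found, which is the liquidity constraint the article calls the network's main bottleneck.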
https://medium.com/@michael-levin/part-2-lightning-network-bitcoins-crossing-the-chasm-superpower-7fe4fd4702dc
['Michael Levin']
2021-05-27 20:21:42.626000+00:00
['Technology', 'Lightning Network', 'Developer Tools', 'Bitcoin', 'Cryptocurrency']
987
Technological Revolution of Wearable Devices
We’re seeing an increase in completely different types of interaction models, signaling a major change in how we think about and communicate with technology. Recognizing, comprehending, and profiting from today’s evolving wearables environment is critical to the success of a number of businesses. Check the disclaimer on my profile. Furthermore, I recently learned that these other companies were said to be developing a next-generation version of the iHelp Medical Alert System that would be more advanced, improved, and targeted toward Telehealth. (1) Are you eager to learn about and see the next wave of emerging innovations that could benefit the healthcare industry? This is one you could certainly not skip. In recent years, we’ve seen new technological advancements in the field of wearable technology, advancements that have the potential to transform life, industry, and the global economy. Long-held views on how we use data in our everyday lives and social experiences are being challenged by the wearable revolution. sponsored post. And I discovered that the global wearable technology market could hit approx. $81.5 billion in 2021 and approx. $109 billion by 2024. (2) Is this enough to ensure their dominance in the industry? More information could be found here. A number of industries are said to be working on new and advanced wearable technology, especially in the health care industry, which is developing health care trackers in addition to fitness trackers. It’s exciting to be able to contribute to the future of the technological revolution! The wearable technology industry’s future might be bright! Source 1: https://www.livescience.com/topics/wearable-technology Source 2: https://www.toptal.com/designers/ui/the-psychology-of-wearables
https://medium.com/@soranofinge/technological-revolution-of-wearable-devices-623d9334ec8a
['Sorano Finge']
2021-04-21 11:50:20.990000+00:00
['Technology', 'Health', 'Devices', 'Finance', 'Revolution']
988
DAY 9: Intergalactic Initiative for Partners within the IGGalaxy & Beyond
DAY 9: Intergalactic Initiative for Partners within the IGGalaxy & Beyond Yesterday, we were pleased to confirm that we will be listed on the LATOKEN exchange in January 2019. This exchange listing comes with four base pairings: BTC, ETH, USDT and LATOKEN. The LATOKEN Exchange is currently ranked 31 in Coin Market Cap's Top 50 Cryptocurrency Exchanges by trade volume, with over $100 million in 24-hour trade volume. This comes at a pivotal time, as our Alpha development is progressing well. As we enter the competitive scene, this increase in liquidity is sure to see the proliferation of further esports organisations and players joining the IGGalaxy. We have said it before, and it certainly will not be the last time we say it: adoption is coming. For Day 9 of our Intergalactic Countdown, we are unveiling an Intergalactic initiative that will leverage our platform to provide exposure and value to some great causes! We will use our platform to generate support for our Partner teams, allowing them to compete against the heavyweights in the industry. Their success is our success; we believe it is important that the community can support our Partners in their endeavours to scale up. We will also look to utilise our platform to provide support to our wider community. As Super Representatives on the TRON Network, we feel an obligation to improve the wellbeing of less fortunate people around the world through opportunity and investment. 1) Partner Team Support Our Intergalactic platform will have a dedicated section for Partner teams to submit plans for community support. Each Partner team will have an allocated wallet, and an opportunity to call on the support of our Galacticans in the IGGalaxy. Our Partners have already demonstrated their hunger to expand. Each has set realistic objectives that they are capable of reaching; however, as we have witnessed, the current esports landscape is not profitable for all. In many cases, those that have the ability to succeed in this industry rarely do so; this is something that together we can change. Our Partners are raring to go, they just need you! From today, our Partners will be able to submit their plans for the upcoming year. We will then share these plans with Galacticans and TRONICs, who will then be able to make informed decisions as to whether to support them. These plans may range from improving rosters to attending esports events and other organisational development plans. The benefits of supporting our Partners will vary, depending on their plan. 2) Community Support Community members will also have the opportunity to receive funding for projects they are working on. This initiative may be an opportunity for gamers to engage our community; for instance, support with attending tournaments. These gamers can essentially use this as a platform to generate funds to cover the costs of competing, with the intention of repaying that value to the ecosystem as a whole. We feel that involving the community in this initiative will inspire innovation among our Galacticans and TRONICs alike; innovative projects that will undoubtedly benefit the TRON Network. Like our Partners, community members will be required to submit proposals outlining the holistic benefits to the ecosystem. Our community members will vote on each, and contribute IGG and/or TRX to individual projects they believe in. We will release more information detailing how to participate in this initiative in the coming weeks.
3) Global Support Binance are proving to be a leading presence in the crypto-world as they strive to utilise blockchain and cryptocurrencies for social good. Binance Charity provides a platform for programs and campaigns to improve the wellbeing of less fortunate people around the world. One particular active project, Restore Bududa Program, is providing life saving resources to those affected by the Bukalasi Sub-county Disaster in Bududa District. Four hours of heavy rainfall destroyed river banks, subsequently leading to disastrous floods and mudslides. The disaster victims are in desperate need of shelter, food, and healthcare. The IG Team believe this is a fantastic cause and have all contributed to donate 0.37 BTC (£1150) [https://www.binance.charity/projectDetail/2]. Our plan is to utilise our platform to be able to carry out a similar function, whereby Galacticans and TRONICs can contribute to similar projects. More information regarding these initiatives will be released in the coming weeks. A Call to Arms! Our Intergalactic Community have proven themselves to be true visionaries in the future of esports. More importantly, they are keen to see issues of limited accessibility addressed. Those that contribute to the growth of our Partners will be rewarded for doing so. We have always been extremely grateful to those that have supported our vision; as we grow, our desire to reward those that continue to add value to the IG x Esports ecosystem grows. Secondly, and by no means least, those that contribute will be integral to IG x TRON revolutionising the esports landscape. We will look forward to Partners submitting their proposals for the coming year. Following on from this, we will call upon all members of the community to stay vigilant and provide support where they can. Please follow us on our various social media channels to keep up to date with developments: Website: www.iggalaxy.com Twitch: https://www.twitch.tv/igg_esports Reddit: https://old.reddit.com/user/Intergalactic_Gaming Twitter: https://mobile.twitter.com/official_igg Facebook: https://www.facebook.com/IGGalaxy/ Instagram: https://www.instagram.com/intergalactic_gaming/ LinkedIn: https://www.linkedin.com/company/intergalactic-gaming Medium: https://medium.com/@info_91865 IG TELEGRAM GROUPS https://t.me/IGAnnouncements— ANNOUNCEMENTS CHANNEL https://t.me/IGgge- Galactic Grand Exchange (GGE) https://t.me/IGG_Official— ENGLISH (MAIN) https://t.me/IGG_Korean— KOREAN https://t.me/IGG_Spanish— SPANISH https://t.me/IGG_German— GERMAN https://t.me/IGG_French— FRENCH https://t.me/IGG_Dutch — DUTCH https://t.me/IGG_India- INDIA Esports Telegram Groups: https://t.me/IGFIFA— FIFA https://t.me/IGRocketLeague— Rocket League https://t.me/IGFortnite— Fortnite https://t.me/IGOverwatch— Overwatch https://t.me/IGNHL— NHL Fantasy Football Telegram Groups: https://t.me/IGNFL— NFL https://t.me/IG_FF— Fantasy Football (BPL) Partner Telegram Groups: https://t.me/MazerGaming— Mazer Gaming https://t.me/TronEsports— Tron Esports https://t.me/GSINesports— GSIN Esports https://t.me/DemiseEsports- Demise Esports https://t.me/SangalEsports- Sangal Esports https://t.me/FuegoGaming- Fuego Gaming https://t.me/ProtonGaming- Proton Gaming https://t.me/MaskedEsports- Masked Esports https://t.me/FFSeSports- For F1FA Sake https://t.me/TEAMKaiRoS- TEAM KaiRoS https://t.me/TronWalletMe- Tron Wallet Me
https://medium.com/@intergalacticgaming/day-9-intergalactic-initiative-for-partners-within-the-iggalaxy-beyond-a7119e6fcaf5
['Intergalactic Gaming']
2018-12-30 00:06:06.562000+00:00
['Esports', 'Blockchain', 'Blockchain Technology', 'Crypto', 'Charity']
989
Blockchain Trend in 2019 #1: The Weak Died, the Strong Evolved
1.1 Three Core Characteristics are Vital for Blockchain Technology 2018 witnessed a clean-up of the blockchain scene as prices dipped and overhyped projects shut down or went to court — something we had long waited for. But at the same time, pragmatic developers of platforms like EOS, Hyperledger, Corda, and Stellar kept on building what matters. In 2019, their struggle for leading positions will intensify and focus on applicability, interoperability, and innovation. Therefore, we expect improvement in these areas and maximum compliance with market requirements as the new trends in blockchain technology. 1.2 Enterprise-Ready Infrastructure is Forming Consortium blockchain is the first and most obvious way to apply DLT technology in private setups. Enterprise collaboration options were very limited before blockchain. Companies used to share monthly or yearly reports on certain market trends, which is pretty time-inefficient. Another option is to build custom integrations for API access to their enterprise infrastructure and data. This approach has always raised concerns regarding privacy, security, data leaks, and breaches. As a result, enterprises often remain associated with a closed and conservative environment. Private blockchains, on the other hand, can offer secure sharing of only the necessary data between participants while avoiding potential compromises of the sensitive-data security perimeter. Recent use cases include We.trade, BITA, MOBI, and Food Chain. Almost inevitably, consortium blockchains led to a bloom of dedicated blockchain infrastructure solutions: AWS, Azure, IBM, Oracle, and Google, to name a few. This step, often referred to as Blockchain-as-a-Service, established the standards for enterprise blockchain infrastructure, security, resilience, and maintenance. Consequently, the entrance threshold shrank as these standards became widely accessible. After a series of such releases, enterprises gravitated toward particular blockchain technology solutions. The latest blockchain trends suggest that BaaS will continue to evolve in 2019, and new setups working on specific tasks will emerge. We expect plug-and-play templates to solve typical business problems such as personal data management, supply chain tracking, secure transaction storage, and many others. We are convinced this will give new impetus to mass adoption. Therefore, we are going to witness further competition, standardization, and relevant commercial applications of blockchain platforms. How exactly do different decentralized platforms compete with each other? Currently, the success criterion lies in how well the blockchain trilemma is solved for a specific target segment. Blockchain Trend in 2019 #2: Solving the Trilemma The trilemma for the blockchain world means that among the given three variables — scalability, decentralization, and security — any two will always succeed at the cost of the third. Hence, instead of racking one's brains over this trilemma, it is better to select the optimal system. 2.1 Scalability is Not a Concern Anymore Scalability is a system's ability to grow without slowing down its transaction speed. For example, it is not one of Ethereum's strengths, since smart contracts are executed by the whole system, which has a lot of users. Hence, it cannot significantly increase capacity. Competitors try to solve the problem in different ways.
For example, a network can rely on pre-selected "qualified handlers" who are allowed to handle smart contracts only if they possess the required minimum of computing power and a good connection. EOS increased its scalability at the cost of decentralization. 2.2 Decentralization Seems Like a Fair Trade-off Decentralization stands for the "number of people taking part in the consensus." The more access various players have to decision-making mechanisms, the less likely sabotage or misuse are. EOS sacrificed its decentralization for better scalability through Delegated Proof-of-Stake. As a result, a small number of people gained most of the power, making it easier for them to agree on decisions that are more advantageous to them. This has actually happened. The dispute was resolved, but the credibility of the system was severely undermined. 2.3 Security Remains a Focus for Developers Therefore, Ethereum still sits at the top of the chain. The first public platform for Dapps, it has established an active community, built a rich ecosystem, and remains a blueprint for numerous decentralized projects. Although many have been expecting improvements from the project, it hasn't compromised on the third parameter of the trilemma, security. Hackers stealing money is unacceptable and, yet, frequent. Whether the concern is centralized crypto exchanges or errors in smart contracts, security must always remain a top priority for DLT systems. Private blockchains have the advantage of appointing known participants, which boosts throughput significantly. It also allows them to advance security, since the identification information required to verify a transaction is stored and processed within the system. This can be achieved by various technical means: confirmation of cellphone numbers, fingerprint scanners, ID and photo identification. The same measures, however, would reduce the scalability of public chains. Blockchain Technology Trend in 2019 #3: Innovative but Compliant 3.1 We Will See Consistent Growth of Blockchain Applications What does 2019 hold for us? According to the current trends in blockchain technology, there will be much less hype and more valuable growth. The industry has woken up from the illusionary dream and brushed off the ICO dust. Due to the hard selling of blockchain's potential, the public grew tired of the term (even a beverage company manipulated investors with the word!). Gradually, the separation of the DLT field from cryptocurrencies will widen as the image of blockchain improves. 3.2 Fintech Remains the Hottest Sector Instead, there is a clear FinTech trend: stablecoins and blockchain-enabled custodian services are flooding the market. Lightning Network and other sidechain initiatives cut the costs of transactions substantially while bringing even more people into the global economic pie. This will make Bitcoin great again and expand a truly global, fast, and reliable financial infrastructure. Logistics and supply chain follow suit as investment in blockchain development continues. 3.3 There are Still Legal Limitations for DLT Also, legislative pressure on blockchain companies is increasing, which, in turn, may jeopardize decentralization. The well-known GDPR legislation protects personal data and defines ethical rules for its processing. Under this regulation, if a company abuses a customer's data, the customer can contact law enforcement agencies to investigate. It turned out that it is impossible to use such services in Europe without violations when processing data in foreign data centers.
Google has already been fined 50 million euros for related offences, but the law also pressures businesses using public blockchains for data processing. For instance, if your company uses Ethereum smart contracts to handle personal data, that is considered a violation of GDPR. A "sharding" approach illustrates the principle that avoids this situation: separate nodes carry out particular transactions off the main network and only send the results to the blockchain afterwards. Conclusion on Key Blockchain Trends The competition between key blockchain platforms and frameworks is tightening and delivering solid infrastructure, better interfaces, and new functionality. This blockchain trend will further advance standardization and interoperability in 2019. Governments will transition from exploring to partly deploying DLT. Besides, as blockchain infrastructure improves, and more solutions such as BaaS offerings and ready-made architectural templates emerge, blockchain adoption will keep growing.
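The off-chain pattern sketched in section 3.3 is easier to see in code. Below is a minimal, illustrative Python sketch, not taken from the article, of the common approach of keeping personal data in a mutable off-chain store and anchoring only a salted hash on an append-only ledger; a GDPR erasure request can then be honoured by deleting the off-chain record without rewriting the chain. All names and structures here are hypothetical.

```python
import hashlib
import os
import uuid

# Off-chain store: personal data lives in a mutable database the company controls.
off_chain_store = {}
# On-chain store: stand-in for an immutable ledger; only fingerprints go here.
ledger = []

def register_user(name: str, email: str) -> str:
    """Keep personal data off-chain; anchor only a salted hash on-chain."""
    record_id = str(uuid.uuid4())
    salt = os.urandom(16)
    fingerprint = hashlib.sha256(salt + f"{name}|{email}".encode()).hexdigest()
    off_chain_store[record_id] = {"name": name, "email": email, "salt": salt}
    ledger.append({"record_id": record_id, "fingerprint": fingerprint})
    return record_id

def erase_user(record_id: str) -> None:
    """Erasure request: delete the off-chain data (including the salt).
    The on-chain hash remains, but without the data and salt it reveals nothing."""
    off_chain_store.pop(record_id, None)

rid = register_user("Ada Lovelace", "ada@example.com")
erase_user(rid)
print(len(ledger), rid in off_chain_store)  # 1 False: the ledger is untouched, the data is gone
```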
https://medium.com/hackernoon/blockchain-trend-in-2019-1-the-weak-died-the-strong-evolved-2b4e8075f8b4
[]
2019-07-13 10:18:02.649000+00:00
['Technology', 'Blockchain Development', 'Blockchain Technology', 'Blockchain', 'Trends']
990
Soar’s Global Super-map SPUR’d on by WA Government Grant
Soar has made further steps towards the commercialisation of its global Super-map with the awarding of a grant from the Western Australian State Government’s SPURonWA program administered by Landgate. Soar was one of 8 winners of the SPUR grants which are focused on the development of location-based technologies. The SPURonWA grant will assist Soar through supporting the legal and marketing aspects of Soar’s commercialisation activities. “The SPURonWA grant is just another validation of Soar’s innovative approach to decentralised mapping on the blockchain” said Soar CEO Amir Farhand. “We appreciate the support that grants like this can provide to businesses like Soar who are developing new geospatial technologies.” Soar is already advancing with its development and has released its Test Net demonstration at demo.soar.earth which allows users to upload and download drone images and now incorporates satellite imagery from the European Space Agency’s Sentinel series of satellites. The release of Soar’s Main Net is planned for February 2019. The original Media release by the Western Australian State Government announcing the SPURonWA grant winners can be found here.
https://medium.com/coinbene-official/soars-global-super-map-spur-d-on-by-wa-government-grant-afc01b810d1
[]
2019-02-25 03:18:02.821000+00:00
['Mapping', 'Coinbene News', 'Blockchain Startup', 'Blockchain Technology', 'Cryptocurrency']
991
Making progress against our mission: Introducing the Pluralsight One Impact Book
At Pluralsight, we see first-hand every day how technology makes the impossible, possible. Technology skills give people power over their lives and careers. It's why we founded our social enterprise Pluralsight One — to advance our mission of democratizing technology skills and create significant, lasting social impact. Today, I'm excited to share with you our first-ever Pluralsight One Impact Book, which outlines our growth, progress and evolution. While this book is not entirely comprehensive, it will give you insight into major highlights and milestones from 2018. Decades ago, President John F. Kennedy detailed his vision to land on the moon, and that dream became reality only eight years later — the term "moonshot" was born to describe a huge problem, breakthrough technology and a radical solution. Democratizing technology skills is our moonshot, and Pluralsight One is on a mission to realize that dream. Together, we have an incredible opportunity to unlock skills that create agency. We can promote lifelong learning, turn consumers into creators and help communities solve their toughest challenges. And we can't do it without you. We believe in transparency and accountability, so we can reflect on our work, refine our solutions and accelerate outcomes. And I'm extremely proud of what Pluralsight One has accomplished over the past year. Thank you for supporting our mission and enabling the growth of Pluralsight One. We are at a tipping point in history — with technology poised to spark a movement towards prosperity or extreme inequity and vulnerability. We can tip the odds in the right direction. Together, we are one. You can view the full Impact Book here.
https://medium.com/pluralsight/making-progress-against-our-mission-introducing-the-pluralsight-one-impact-book-24778bade144
['Aaron Skonnard']
2019-02-07 17:38:54.612000+00:00
['Skills', 'Technology', 'Social Impact', 'Pluralsight One']
992
How to choose a Bitcoin wallet? A short guide
Bitcoin wallet — how to choose? With plenty of Bitcoin wallets on the market, it feels quite hard to choose one. Each one says it's the best; often they have similar features. Looking up user reviews is a good idea, but it only leaves a general impression and doesn't tell you whether the wallet will be comfortable specifically for you. To choose the best Bitcoin wallet, answer the following questions: do you need a cross-platform, a desktop-only, or a mobile-only wallet? Do you need ultimate security, or is usability more important? Are you ready to pay money to buy the wallet? Let's review the most common types of Bitcoin wallets and see how they suit the needs of different users. Bitcoin hardware wallets — ultimate security These wallets look like flash drives, and they store your private keys inside. Hardware wallets are also called cold wallets because they store your keys out of the internet's reach — this reduces the risk that your keys can be stolen in an attack. The most popular Bitcoin cold wallets are Ledger and Trezor. Hardware wallets are considered the most secure ones on the market, so use them if you have considerable stashes of Bitcoin and it's critically important to preserve them. Hardware wallets cost about $80–100. To sign any transaction, you'd need to plug your hardware wallet into the computer. This makes cold wallets inconvenient for those who make many transactions daily, especially from their cell phones. Also, a cold wallet is not recommended for those who are prone to losing small things. Bitcoin desktop wallets — enhanced usability Desktop wallets are hot wallets — they are connected to the internet. This doesn't mean the wallet provider has access to your funds: the wallet only generates private keys that are then stored on your computer. Desktop wallets let you access your crypto whenever you're at your computer. One of the prominent Bitcoin wallets is Electrum. It was created in 2011 by Bitcoin enthusiasts and hasn't changed much ever since; as an old open-source wallet, it's considered one of the most secure desktop wallets. As for the cons, the user interface is not perfect, and it's a Bitcoin-only wallet. However, it has a number of advanced security features such as 2FA, cold storage options, and so on. Another trusted Bitcoin wallet is Atomic. It allows for storing 500+ crypto assets. It also has a mobile version, so we'll take a look at it in the next paragraph. Bitcoin Mobile Wallets — accessibility above all For a Bitcoin mobile wallet, ease of use is a key factor. Here, Atomic Wallet stands out — a multi-asset wallet with a built-in exchange that allows you to swap your Bitcoin for other cryptos instantly without leaving the app. In Atomic, you can buy Bitcoin with a credit card, monitor the Bitcoin price chart, and set the network fee manually in case you want your transaction to be delivered faster or cheaper. The same features are available in the Atomic desktop version, so you'll have access to your coins basically everywhere. Atomic is available for Android and iOS. Cutting it short If security is your top priority, choose a hardware Bitcoin wallet such as Ledger or Trezor. If you want access to your crypto from your desktop, use Electrum. If you are keen on usability and want to store your whole crypto portfolio in one place, choose Atomic Wallet.
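To make the key-custody point above concrete (the claim that a wallet "only generates private keys that are then stored on your computer"), here is a tiny illustrative sketch, not from the article, using the third-party Python ecdsa package; real wallets add address derivation, encrypted key storage, and backups on top of this.

```python
# Illustrative only: the private key is generated and used locally and never sent anywhere.
# Assumes the third-party `ecdsa` package (pip install ecdsa).
from ecdsa import SigningKey, SECP256k1

private_key = SigningKey.generate(curve=SECP256k1)   # created and kept on this machine
public_key = private_key.get_verifying_key()         # safe to share with the network

tx_digest = b"placeholder transaction digest"
signature = private_key.sign(tx_digest)               # signing happens locally

assert public_key.verify(signature, tx_digest)        # anyone can verify with the public key
print("signed locally; only the signature and public key need to leave this machine")
```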
https://medium.com/@changenow-io/how-to-choose-a-bitcoin-wallet-a-short-guide-e37571025c91
[]
2021-06-23 15:40:48.410000+00:00
['Cryptocurrency', 'Wallet', 'Bitcoin', 'Technology', 'Crypto']
993
Review of Grokking Dynamic Programming Patterns for Coding Interviews
Dynamic programming (DP) is a hard and distasteful subject for the vast majority of students and makes for an equally daunting and intimidating tech interview topic. In my several years of IT experience, I have hardly been quizzed on dynamic programming questions in interviews. From a practical standpoint, an average software engineer is unlikely to employ dynamic programming algorithms in their day-to-day work. In fact, I have met software engineers who retired without ever confronting a dynamic programming problem in their careers. Neither have I, so far. This isn't to say that dynamic programming isn't important or that it has no utility. It certainly has its place, but most engineers won't find themselves butting heads with the subject in their careers. Given this context, you may wonder whether it's even fair to ask DP questions in interviews for positions that don't require this skill set. Many interviewers are cognizant that the position they are interviewing candidates for doesn't require a DP skill set and will avoid related questions. But there are a few who mess it up for everybody. For example, someone at a big-name company would ask a DP question of a candidate, who would then go tell her friends, who would in turn tell other people, post on forums, Quora or Blind, and before you know it, a perception forms that this particular big tech company asks DP questions in its interviews. And this brings me to my main point, which is that ed-tech companies capitalize on this fear of DP questions and sell expensive courses dedicated solely to the subject of dynamic programming. The course "Grokking Dynamic Programming Patterns for Coding Interviews" is one such example. Even though the chances of running into DP questions in an interview are slim, courses on these topics sell like hot cakes because potential candidates have heard the rumor from a friend, or a friend of a friend, that interviews at company X consist of DP questions. Educative's course is a great example of a product selling on fear more than need. The course has been authored by Arslan of "Grokking the System Design Interview" fame, who himself is an ex-Facebook engineer. The course does a decent job of explaining the subject matter and is structured like a crash course. But I wouldn't recommend buying it, as I find the price tag unjustified for something with endless, freely available material online. If you want to prepare DP for interviews, I'd suggest sticking to one of the other comprehensive courses on Educative that include a smaller section on DP, such as "Grokking the Coding Interview: Patterns for Coding Questions" or "Coderust: Hacking the Coding Interview". For an interview, prepare DP questions as an appetizer rather than the main course. Nevertheless, if you fall for the fear-marketing and feel tempted to buy the course, I'd suggest looking at its table of contents and simply googling each question in the list. You should find plenty of free online resources for each question. The only exception, when it may make sense to buy the course, is when you have several months for preparation combined with the motivation and zeal to comb through each possible interview topic. Otherwise, your purchase will be analogous to the treadmill bought with great fervor and passion but, a few months later, used only to dry wet laundry on. Except that this course won't even be useful for daily chores.
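For readers who have never seen one, here is what a typical DP interview exercise looks like. This is a generic, illustrative Python example (minimum coins for an amount), not material taken from the course under review.

```python
from functools import lru_cache

def min_coins(coins, amount):
    """Fewest coins (values in `coins`) that sum to `amount`, or -1 if impossible."""
    @lru_cache(maxsize=None)
    def best(remaining):
        if remaining == 0:
            return 0
        if remaining < 0:
            return float("inf")
        # Overlapping subproblems: try each coin as the last one taken;
        # memoization ensures every subproblem is solved exactly once.
        return 1 + min(best(remaining - c) for c in coins)

    answer = best(amount)
    return -1 if answer == float("inf") else answer

print(min_coins((1, 2, 5), 11))  # 3, e.g. 5 + 5 + 1
print(min_coins((2,), 3))        # -1, impossible
```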
https://medium.com/double-pointer/review-of-grokking-dynamic-programming-patterns-for-coding-interviews-7ffc4967611c
['Double Pointer']
2020-10-12 04:01:07.462000+00:00
['Codingbootcamp', 'Coding', 'Coding Interviews', 'Technology', 'Code']
994
Remote teaching: Why a front-loaded and fast-paced class fared well in the wake of the COVID-19 Pandemic
Every teacher and professor had their share of strain brought on by the ongoing COVID-19 pandemic. Through listening, adaptation, and planning, my students and I made it to the finish line this semester with quite a bit of energy to spare. There was one key decision that helped me have a very smooth remote teaching experience: preparing a front-loaded syllabus. I made that choice consciously, but of course without predicting the implications of the then-developing pandemic. I aimed to empower students with a broad spectrum of tools early in the semester and use the rest of the time for individual project development. This strategy ended up working well for remote teaching. Shaping a classroom through Agency, Adaptation, and Tooling Over the years, I observed that front-loaded classes and workshops fared much better in many respects. Combined with a longer exploration process, in the end, they turned into a teaching formula for achieving high-yield, high-quality results. I prepared the syllabus for the 2020 spring semester for the class I taught at MIT's School of Architecture and Planning with three things in mind: 1- Agency When you prepare a front-loaded class or a workshop, you shock the students while they are at their freshest. This enables them to see a lot of material in a short period, but more importantly to pick and choose whatever makes the most sense to them. Yes, no student learns everything we try to teach, and it is better if we empower them with the material they are interested in. This kind of agency helps students become a more integral part of the teaching process. If the teaching material is comprehensive and the teaching style is generous enough, students can and will have a say in what they learn. If the material is not malleable, they will either have a hard time adapting, or won't align with the classroom dynamics at all. 2- Adaptation Teaching is about bi-directional adaptation. As the students adapt to the class and teaching material, the instructor needs to adapt to the overall drive of the class. Instructors are inclined to expect the students to adapt. But not all of them think about the fact that for a symbiosis to happen, both parties need to act. A front-loaded class helps both parties adjust and make choices early in the semester. I revealed the goals and mechanics of the subject in the very first class and dived right into the material. With a fast follow-up in the second week, I quickly got a sense of whether the choices I had made about the amount and delivery of the teaching material were working. In the meantime, the students came with questions to figure out if the class was the right one for them. One other benefit of a fast-paced start was that students who were not going to adapt moved on quickly. At MIT, students 'shop' for classes in the first week or two. Showing the intensity of the class early in the semester helps refine the crowd. In short, if you are open to changing things as you go — which in my mind is a must — a front-loaded class helps you plan earlier in the semester. 3- Fluency If you are teaching a class that includes skill-building components, there are fundamentally two tracks you can follow. 1 — You can plan to move incrementally and distribute skill-building sessions throughout the semester. This would help students learn and digest skills over longer periods. A slower pace can also help them add skills more easily.
This is a low-stress and low-risk choice, but it takes away from the time that could be invested in the employment and refinement of skills. 2 — Alternatively, you can front-load the syllabus with skill-building sessions and then observe the tendencies of students in picking things up. This is a riskier move, as not all the students would be able to follow the pace of the class. Students may need more support when you want to add on top of something that you have already taught in class. This would put more work-hours on the instructor. Just looking at the overall picture, the first option appears to be more logical, the safe bet. Yet the second option, although it comes with some risks, increases the chances of break-through achievements — if they are ever to happen in the class. Moving within a fast-paced setting, students hit a steeper learning curve, but at the same time become accustomed to the tools of the class earlier. This helps them become fluent in the tools they are using more quickly. Especially for an application- and making-oriented class, the second option works miraculously better. How so? I learned how to teach over 15 years of piecemeal teaching I have quite a mixed past in teaching, and I have nowhere near the experience of a full-time academician. Yet jumping back and forth between academia and professional practice, or spending time in both simultaneously, helped me translate the strategies of teaching across these two domains. What did I do? I co-taught design studios. I developed design, geometry, and scripting classes. I happened to initiate and lead an undergraduate design program, somehow early in my career. Last but not least, I conducted many workshops in different schools, cultures, and countries. These workshops, which ran anywhere from three hours to two weeks, taught me even more about developing syllabuses than semester-long classes did. The diversity of students' backgrounds, ages, and interests taught me a lot as well. While in professional practice, I happened to teach people whose age was (more than) double mine. Later, I found chances to teach fresh-out-of-high-school kids. Over and over again, I discovered that front-loaded scenarios fared better: starting vertical (and going deep) and then going horizontal (and expanding). I applied this strategy to my latest teaching adventure. I asked the students to develop a "design" that had to be re-thought, letting go of its preconceived "parts." My motivation stemmed from my ever-unfolding inquiry about part-whole relationships that I explained in my latest story: I deployed the teaching material through 4 tracks: Presence, Function, Quality, and The Whole.
https://medium.com/age-of-awareness/remote-teaching-why-a-front-loaded-and-fast-paced-class-fared-well-in-the-wake-of-the-covid-19-cdcabb0a85fc
['Onur Yuce Gun']
2020-06-22 16:38:40.909000+00:00
['Design', 'Creativity', 'Education', 'Technology', 'Innovation']
995
Bored of VS Code? Try Lite-XL
Bored of VS Code? Try Lite-XL Visual Studio Code vs. Lite-XL, a cover designed by the author with Canva. I was a die-hard fan of Visual Studio Code for three years. But I started using a lightweight alternative called Lite after Visual Studio Code started behaving similarly to Visual Studio, taking all the resources that other processes wished to use. Lite is a minimal code editor written in Lua and C. It is indeed implemented as minimally as possible. The Lite editor core is an application that consists of a multi-line textbox made with the SDL graphics library. All the other modern code editor features, such as syntax highlighting, are made as plugins. It takes just one megabyte on your disk and consumes around 20 megabytes of physical memory. However, it doesn't offer all the features every developer needs. The maintainer of the Lite project mentioned that the project aims to provide something practical, pretty, small, and fast, implemented as simply as possible — easy to modify and extend, or to use without doing either. In other words, the Lite editor itself may not deliver any further features, and if someone needs more features, they have to fork the source repository and extend it. Lite-XL is an actively maintained fork of the Lite editor, and it offers almost all the basic productivity features that Visual Studio Code has. Three months ago, I wrote a story explaining how Lite-XL technically performs better than Visual Studio Code. In this story, I will take you through Lite-XL's new features that make it better than Visual Studio Code. Problems With Visual Studio Code If Lite-XL is just a code editor that does the same job as your favorite Visual Studio Code, why should you try an alternative? Well, there is a considerable technical difference between the two. Visual Studio Code is built on top of the Electron framework, which lets developers build cross-platform desktop apps with web technologies. Visual Studio Code is a web application that runs inside a frameless native window. On the other hand, Lite-XL is a native desktop application built with the SDL graphics library. Lite-XL works on Linux, macOS, and Windows because SDL is a cross-platform graphics library, like Google's Skia. SDL doesn't render elements to a Chromium webview the way Visual Studio Code does; it renders graphical elements natively via OpenGL or DirectX. One Lite-XL instance typically takes around 10 megabytes of physical memory — while one Visual Studio Code instance takes more than 400 megabytes. Visual Studio Code is adding new features to the editor core every day. It now takes around 300 megabytes of disk space without any extensions, and it wouldn't be surprising if it takes a gigabyte in a few years. Ever heard of VSCodium? Even though Visual Studio Code's source code is MIT-licensed, Microsoft makes releases with a different non-FLOSS (Free/Libre and Open Source Software) license by adding a kind of commercial flavor that includes telemetry (tracking). The VSCodium project releases the latest binary builds from the MIT-licensed codebase. However, VSCodium is technically the same Visual Studio Code, which consumes above-average resources. In the worst-case scenarios, you may run multiple Visual Studio Code instances alongside other Electron-based hybrid desktop apps and a web browser. Then you might blame your computer's hardware, but in reality, your computer has become a playground for modern bloatware.
The following story addresses this modern bloatware issue further: How To Customize Lite-XL As Visual Studio Code As mentioned earlier, Lite-XL's features (even the context menu and tree view) typically come as plugins. However, the Lite-XL core includes several crucial features such as the status bar, command executor, and file search. Lite-XL is just a text editor without any plugins, as shown below: Lite-XL core, a screenshot by the author. It looks like this if we customize it similarly to Visual Studio Code. Lite-XL customized similarly to Visual Studio Code, a screenshot by the author. The memory usage will never go above 15 megabytes, even after these customizations. Let's begin the Lite-XL customization process. First of all, make sure to download the latest Lite-XL version from GitHub releases. After that, open the preferences file ( init.lua ) and add the following line to enable Visual Studio Code's default theme: core.reload_module("colors.vscode-dark") Every Lite-XL release has pre-installed plugins such as auto-complete, tree-view, context menu, syntax highlighting for some languages, etc. But you may need to install the following plugins to make it more like Visual Studio Code. Installing a Lite-XL plugin is a piece of cake. You can copy the plugin to the data/plugins directory and restart the editor to get a particular plugin activated. You can restart the editor with the command executor by pressing Ctrl + Shift + P. The restart command in Lite-XL, a screenshot by the author. Now, install the following plugins with the above method. All plugin source files are available here. indentguide The indent guide plugin draws a vertical line for each indentation level, similar to Visual Studio Code. minimap This plugin renders a visual map of the source code on the right side of the editor, similar to Visual Studio Code. Visual Studio Code renders the source code map via an HTML canvas, but Lite-XL renders it natively. Therefore, resource usage won't go up when you work with larger files. Additional syntax highlighting support Lite-XL doesn't include syntax highlighting support for all supported programming languages by default. For example, it doesn't offer JSX and TypeScript syntax highlighting support right after the installation process. Therefore, you need to install syntax highlighting plugins as you wish. Conclusion Visual Studio Code is backed by Microsoft and has a larger developer audience around it. But Lite-XL is new and still has a small developer audience (still fewer than a few hundred members on Discord). Nowadays, native application development is underrated and is being replaced by hybrid application development built on Electron. Hybrid application development frameworks motivate developers to make native-like hybrid apps quickly, hiding performance issues behind modern hardware. However, frameworks/libraries like Flutter and SDL offer better, performance-first solutions for developing cross-platform applications. Also, cross-platform frameworks like Tauri and Neutralinojs try to offer an Electron-like development experience with lightweight architectures. Lite-XL is built with SDL and is a truly native desktop app. Protect apps like these by using them, because this could be the final era of native desktop apps. Thanks for reading.
https://betterprogramming.pub/bored-of-vs-code-try-lite-xl-76d4cb3f8dda
['Shalitha Suranga']
2021-08-26 14:35:31.698000+00:00
['JavaScript', 'Vs Code', 'Programming', 'Software Development', 'Technology']
996
DPM and Exchange Server 2007 SP1 x64
For those of you using Microsoft's Data Protection Manager and Exchange Server 2007 SP1 x64, you will definitely want to check out this blog entry. We followed the Microsoft guides for using DPM with Exchange Server 2007 SP1 x64, but the guides give you the wrong information and you end up with consistency errors like this one: The replica of Storage group First Storage Group on mail.vanderburg.com is inconsistent with the protected data source. All protection activities for data source will fail until the replica is synchronized with consistency check. You can recover data from existing recovery points, but new recovery points cannot be created until the replica is consistent. For SharePoint farm, recovery points will continue getting created with the databases that are consistent. To backup inconsistent databases, run a consistency check on the farm. (ID 3106) Data consistency verification check failed for LOGS of Storage group First Storage Group on mail.vanderburg.com. (ID 30146 Details: Unknown error (0xfffhhc01) (0xFFFHHC01)) The error here hides the real problem. The ESEUTIL.EXE used with Exchange Server 2007 SP1 x64 runs as a 32-bit process, and it fails because DPM mounts the log file volume within the System32 directory. On a 64-bit machine, WOW64 file system redirection sends 32-bit processes that reference System32 to SysWOW64 instead, which prevents them from seeing the volumes mounted under System32. Solution:
https://medium.com/security-thinking-cap/dpm-and-exchange-server-2007-sp1-x64-439907395cfc
['Eric Vanderburg']
2017-08-22 12:54:09.839000+00:00
['Data Protection Manager', 'Dpm', 'Exchange', 'Information Technology']
997
Top free hacking platforms
1. overthewire.org: wargames and more - practicing hacking legally You might have heard about CTFs (capture-the-flag competitions), but have you heard of wargames? Probably not. When I started in this field, I did not find any resource to begin with. But with my interest in the field and generous help from my peers, I was able to find a way! Here I am sharing my journey from a noob to umm… still a noob :P but with some more experience in the field! Intro I was one of those who get excited by the term "HACKING". Bollywood movies showing red-colored "Access Denied" and "Access Granted" screens fascinated me. That's when it all started. I came across https://null-byte.wonderhowto.com/ while searching for so-called Wi-Fi hacking, the black side of hacking. I wasn't really fascinated with dictionary attacks, but monitor mode and handshakes were something new for me. The wargames offered by the OverTheWire community can help you to learn and practice security concepts in the form of fun-filled games. To find out more about a certain wargame, just visit its page linked from the menu on the left. Suggested order to play the games in: Bandit, Leviathan or Natas or Krypton, Narnia, Behemoth, Utumno, Maze, … Each shell game has its own SSH port. Information about how to connect to each game using SSH is provided in the top left corner of the page. Keep in mind that every game uses a different SSH port. 2. CTFTIME.ORG There are a lot of Capture The Flag (CTF) competitions these days; some of them have excellent tasks, but in most cases they're forgotten just after the CTF finishes. We decided to make some kind of CTF archive and, of course, it'll be too boring to have just an archive, so we made a place where you can get some other CTF-related info — current overall Capture The Flag team ratings, per-team statistics, etc. 3. HACKING-LAB.ORG What is Hacking-Lab? Hacking-Lab is an online ethical hacking, computer network, and security challenge platform dedicated to finding and educating cyber security talent. Hacking-Lab provides CTF and mission-style challenges for international competitions like the European Cyber Security Challenge, and free OWASP TOP 10 online security labs. Hacking-Lab's goal is to raise awareness towards increased education and ethics in information security. Hacking-Lab is popular: 40'000 users have registered, and numerous universities, organizations, and companies are using it to enhance their classes, trainings, etc. Hacking-Lab: approved, well-known online platform for IT security training. Bleeding-edge Attack-Defense system: a system for dynamic and realistic virtual battles between teams. Extensive collection of challenges: more than 300 security challenges are available, covering all IT security topics. Big community: more than 40'000 users are registered for Hacking-Lab. Many volunteers provide content and support. Experience: we know what we are doing — security training and CTF events have been our core competence for more than eight years now. LiveCD: our Linux-based LiveCD is a perfect entry point for Hacking-Lab. It contains many hacking tools and is preconfigured for VPN access. 4. HackThisSite.org Introduction Hack This Site is a free training ground for users to test and expand their hacking skills. Our community is dedicated to facilitating an open learning environment by providing a series of hacking challenges, articles, resources, and discussion of the latest happenings in hacker culture.
We are an online movement of artists, activists, hackers, and anarchists who are organizing to create new worlds. This guide is an introduction to the site, community, philosophy, and project organization. This project is entirely what you make of it. We encourage people to take an active role in the development of this community and to copy and redistribute this guide on their own. Hacking Philosophy We believe everyone should have free access to all information. Hacking should not be a privileged skill — everyone should have the opportunity to learn about computer security in the age of technology. We seek to facilitate a free and open training ground where people can test and expand their skills in a legal and realistic environment. Ethics Considering that several of the hacking challenges are simulated web defacements, the question of the ethics of hacking is repeatedly brought up. We like to consider hacking itself to be a tool, a skill which in itself is neutral, a means without end. It can be used for good (for the benefit of all) or bad (mindless destruction or theft). We do not encourage negative use of the information we provide. We are more concerned with the greater risks of not distributing this information and are ready to accept the consequences. 5. 0XF.at What's 0xf.at? 0xf.at (or oxfat if you prefer) is a website without logins or ads where you can solve password riddles (so-called hackits). This is a tribute site to the old Starfleet Academy Hackits site, which has been offline for many years now. 6. hackthebox.eu To get the invite code for hackthebox.eu, visit "Bineshmadharapu". Hack The Box is an online platform allowing you to test your penetration testing skills and exchange ideas and methodologies with thousands of people in the security field. Click below to hack our invite challenge, then get started on one of our many live machines or challenges. THANK YOU. Follow us: Bineshmadharapu
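As a small illustration of the OverTheWire note above (each wargame runs SSH on its own port), the sketch below connects to Bandit Level 0 using the third-party paramiko package; the host, port 2220, and the bandit0/bandit0 credentials are the ones published on the Bandit Level 0 page, so check the site in case they change.

```python
# Connecting to OverTheWire Bandit Level 0 over SSH on a non-standard port.
# Assumes the third-party `paramiko` package (pip install paramiko).
import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("bandit.labs.overthewire.org", port=2220,
               username="bandit0", password="bandit0")

# The level's goal is simply to read the `readme` file in the home directory.
_, stdout, _ = client.exec_command("cat readme")
print(stdout.read().decode())
client.close()
```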
https://medium.com/@bineshmadharapu561/top-free-hacking-platforms-209a0b453786
[]
2020-12-05 17:20:00.459000+00:00
['Technology', 'Hackathons', 'Hacking', 'Education', 'Cybersecurity']
998
Deliver what matters, together.
Action plan for our communities. As partner carriers on the Mothership network help businesses around the country ship essential goods, the health and safety of these carriers is of utmost importance. Mothership advises all of its partner carriers to take all safety precautions and to only accept shipments that align with best practices for their business. Mothership is thankful for each and every partner carrier, and is honored to join in the mission to keep communities moving during this difficult time. Partner carrier recommendations. Partner carriers are hard at work accepting shipment jobs on our platform to ensure people get what they need most during the current pandemic. We advise our partner carriers to not operate equipment if they have felt symptoms characteristic of COVID-19 (e.g., coughing, fever, shortness of breath) in the last 14 days. Please visit the CDC's website for more information on symptoms everyone should be aware of and how to address them. Extra health precautions like routinely washing your hands with soap and water and using hand sanitizer when needed, avoiding hand-to-face contact and wearing a respirator are important to practice both on and off the job, as well as being mindful of contact with other people and objects. Customers and carriers should take precautionary measures with the loading/unloading of freight. Wash hands with soap and water before and after all interactions with freight or people in general. Take extra health precautions like routinely using hand sanitizer to disinfect hands and avoiding hand-to-face contact. Wearing an FDA-approved respirator is a good health practice to follow. We're here for you. The Mothership team is working diligently from home in order to help prevent the spread of the virus and comply with direction from government leadership. We continue to be fully operational and continue to offer full freight services to all customers at this time. If you are an existing customer that needs to ship, please log in to your dashboard and continue to book freight as you typically would. For any other questions or concerns, please reach out to our product specialist team to see how we can best support your business at this time and provide reliable freight services. If you are a business looking to ship with us for the first time, please schedule a time here for a call to get onboarded to our platform. Driving the community forward and our business referral freight discount. During this time we are committed to keeping our communities moving. If you know any business in need of freight services, please refer them to Mothership using your referral code, found in the "Rewards" tab of your dashboard. Any business you refer will receive $50 off their first shipment. Please share your code with other businesses who might need discounted freight services, especially during a time like this, and please let us know if you or other businesses are interested in donating any of the items mentioned above in our Freight Support Program. We thank you for your customer and partner carrier loyalty and consideration in helping drive this effort forward.
https://medium.com/official-mothership-blog/deliver-what-matters-together-d5a5d614f9ce
['Iñaki Pedroarena-Leal']
2020-04-21 23:25:50.596000+00:00
['Coronavirus', 'Shipping', 'Technology', 'Covid 19', 'Donations']
999
Ambient, preventive health for every home
Healthcare is changing, deeply and for good. Several years after the big flop of the "quantified self", personal health data is back under the spotlight, sometimes in a controversial way, often in a more serious and reliable way. Personal health devices are back as well — thank you apple watch for opening the field, thank you Google for giving fitbit a new life. We are at the dawn of a new era where sensors, data, and software will monitor our health continuously and detect early the slight changes in our patterns that are signs of the onset of disease. Doctors can then treat them before they have damaging effects on our bodies and our lives, or even before they occur. We will live longer, healthier lives, in our homes, at a much lower health care cost, thanks to predictive, ambient health care. To make this happen, it is essential that vital signs are gathered on a regular basis. Wearable devices do the job well, but not everyone wants to or can wear them all day. And not everyone can afford an apple watch. Also, none of these wearables can function 24/7 without being taken off to be recharged. Today, only a happy few of the most dedicated, the luckiest or the wealthiest of us have access to this. A small but active niche population of quantified self geeks will wear several fitbits and spire devices to monitor their vital signs continuously and try to analyze the data to detect changes, good or bad, in their health. Wealthy or very tech savvy elderly people can instrument their apartments or homes, at a high price point and complexity, to detect and prevent domestic issues like falls. But the vast majority of the population does not have medical sensors available. Health in the home should not be like that… There is a bootstrapping issue with the predictive and preventive algorithms too: predictive medicine can make progress only if enough data is available, i.e. if enough patients use it. The more users, the more data, the more powerful algorithms and predictions will be. We need more affordable, more seamless and more scalable devices to bootstrap this revolution. We need devices that do not need to be worn, and require no effort from their users. We need ambient, contactless health monitoring. We, at Norbert Health, have a mission to build just that. We will radically accelerate this preventive healthcare revolution by bringing it to the masses, packaged in a very seamless, affordable and easy device. Health in the home could be more like this! We are building the first ambient medical pod for the home, that will monitor the health of your loved ones without requiring any effort from anyone. Just plug it in, turn it on and it will do the job. Our first mission is to help our rapidly growing elderly population age in their homes. Stay tuned by signing up for our newsletter. Contact us if you want to help us build this.
https://medium.com/@awinter/ambient-preventive-health-for-every-home-dd73bb678c07
['Alexandre Winter']
2019-11-27 14:03:37.935000+00:00
['AI', 'Consumer Electronics', 'Healthcare', 'Computer Vision', 'Technology']