Columns: Unnamed: 0 (int64, 0–3k); title (string, 4–200 chars); text (string, 21–100k chars); url (string, 45–535 chars); authors (string, 2–56 chars); timestamp (string, 19–32 chars); tags (string, 14–131 chars)
600
How Sybil Attacks Affect Social Media Startups. A Tru Story.
Last night, TruStory experienced its first Sybil attack. In a Sybil attack, one person creates multiple accounts in order to exploit a network. In TruStory's case, one person created multiple accounts and then used them to constantly agree with the arguments he had posted from his other accounts, artificially inflating how much it seemed like people agreed with him. The attack came after the single largest day of growth for the application. That growth was largely driven by a hotly contested claim that was the featured debate: feminism is a greater threat to humanity than climate change. As one might guess, this claim attracted a crowd with incredibly diverse perspectives, and growth for a social media company always brings in people with varying viewpoints. In particular, the person performing the Sybil attack went on to create a slew of accounts with controversial and, to be quite frank, outright racist/antisemitic names. (The original post shows the accounts created by this single person and an example of an argument made by one of the attacker's accounts.) We want to be clear that the TruStory community did not downvote this user's arguments because of the content itself. The TruStory community is not against users posting controversial arguments and having those arguments discussed in a productive manner, in accordance with the values of TruStory. The primary reason the community downvoted this user's arguments was that the attacker had created accounts purely for the purpose of upvoting his own arguments, which in turn artificially accrued him TRU points and made it seem like more people agreed with his arguments. This sort of behavior goes against the values and mission of TruStory: the community wants TruStory to be a place where people can have productive debate on controversial topics without artificial manipulation. We are building mechanisms to ensure that users cannot easily create a significant number of accounts that constantly agree with one another's arguments. TruStory is all about bringing people together to discuss controversial issues in a rational manner. Come see what other hot debates are happening now!
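The post doesn't describe TruStory's actual countermeasures, but as a rough illustration of the kind of mechanism described above, here is a minimal sketch in Python. The event log format, function names, and thresholds are all hypothetical, chosen only to show the idea of flagging accounts whose agreement is concentrated on a single other account:

```python
from collections import defaultdict

def flag_possible_sybils(upvotes, min_votes=5, concentration=0.9):
    """Flag voters whose upvotes go overwhelmingly to a single author.

    `upvotes` is an iterable of (voter, argument_author) pairs -- a simplified,
    hypothetical event log, not TruStory's real data model. The thresholds are
    illustrative, not tuned values.
    """
    per_voter = defaultdict(lambda: defaultdict(int))
    for voter, author in upvotes:
        if voter != author:  # ignore direct self-votes outright
            per_voter[voter][author] += 1

    flagged = []
    for voter, targets in per_voter.items():
        total = sum(targets.values())
        top_author, top_count = max(targets.items(), key=lambda kv: kv[1])
        if total >= min_votes and top_count / total >= concentration:
            flagged.append((voter, top_author, top_count, total))
    return flagged

# Example: two throwaway accounts that only ever agree with "attacker"
events = ([("sock1", "attacker")] * 6 + [("sock2", "attacker")] * 7 +
          [("alice", "bob"), ("alice", "carol"), ("alice", "attacker")])
print(flag_possible_sybils(events))
# [('sock1', 'attacker', 6, 6), ('sock2', 'attacker', 7, 7)]
```

A real system would combine signals like this with account-creation metadata and staking costs rather than rely on vote patterns alone.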
https://medium.com/trustory-app/how-sybil-attacks-affect-social-media-startups-a-tru-story-1be8bb34d855
['Mattison Asher']
2019-10-24 21:39:24.833000+00:00
['Social Media', 'Technology', 'Debate', 'Startup Lessons', 'Startup']
601
Meet the Man Who Wants to Reimagine Virtual Reality
Meet the Man Who Wants to Reimagine Virtual Reality “Imagine being able to perform one of the world’s most dangerous and technically difficult stunts with little to no training, no parachuting experience, no cost for equipment and setup, and no risk of death trying to pull it off. What would you do (and how much would it cost) for such an experience?” — JUMP James Jensen. Source: Samantha DeRose from The McRae Agency. I got to interview James Jensen, the CEO of JUMP, a virtual reality (VR) company designed to let users perform stunts with no training, no parachuting experience, no equipment costs, and no safety risks. As an amateur without the experience, expertise, or money to feel that virtual reality was accessible, I found that Jensen dispelled my notions. Jensen has an objective for the virtual reality industry: to make it more accessible, rather than having it perceived mainly as an entertainment and video game tool. “I want people to know that hyperreality simulations are ways for us to learn new things about ourselves and to learn experiences that could influence our waking life,” Jensen told me. Jensen is building a hyper-reality BASE jump adventure where people can experience the thrill of stunts without the experience and danger normally required. Jensen left THE VOID at the beginning of 2018 over stylistic differences with the company’s direction; he wanted to build an extreme sports virtual reality company where people who experience virtual reality through JUMP don’t just get an escape from reality, but take away vital lessons to improve their own lives. He was inspired to make JUMP a company as an evolution of what he was working on at his previous company, THE VOID, a virtual reality experience that lets someone travel into their favorite film. THE VOID currently offers experiences in the Jumanji, Nicodemus, and Ghostbusters worlds. A big inspiration for creating JUMP was a friend and advisory board member, Marshall Miller. Miller is an extreme sports athlete and BASE jumper who has made over 10,000 jumps. Miller showed Jensen some of his wingsuit experiences, and the two talked about how they could pull off the same kind of reality in the virtual world. Jensen wants not only to replicate but to improve the hyper-reality experience he was a part of at THE VOID. He remembers sitting at the exit of the virtual reality experience at THE VOID and seeing people’s faces as they came out. Source: JUMP. “You come in as an adult on one side and a kid on another side,” Jensen said. “Once you do that, there’s a convergence and you build a memory on that level because there are so many inputs.” People came out in awe of what they had experienced and felt renewed, as if they could do anything. Jensen is working with neuroscientist Adam Gazzaley at the University of California, San Francisco. The two worked together on a program called “Sensync,” a virtual wellness experience designed to reduce stress and deliver Deep Brain Massages. In THE VOID, Jensen felt that people were entering their flow states for the first time. According to psychologist Mihaly Csikszentmihalyi, a flow state is a state of consciousness in which people experience enjoyment, creativity, and life engagement. Jensen observed that people spend so much of their daily lives disengaged, worried, and searching for other things; virtual reality was a tool for many to enter a flow state. JUMP is planning to make its first appearance in a major entertainment facility in 2021.
With COVID, the opening landscape is a bit unpredictable. Jensen says that the extreme stunt experience raises no safety concerns beyond those that already exist for virtual reality simulations. JUMP’s advisory board includes Miller; John Gaeta, an Academy Award-winning designer who worked on The Matrix trilogy; Jim Shumway, a senior project manager and rigger for Cirque du Soleil; and Luke Atkins, a professional skydiver and BASE pilot. Atkins is the first person ever to dive from mid-tropospheric altitude and land without a parachute; instead, he landed in a series of nets rigged 20 stories high. He went through with the jump and completed it even after a test dummy crashed through the net, suggesting the jump would be unsafe. As for concerns that virtual reality is simply too expensive for most people, Jensen believes virtual reality is this generation’s version of the television. He says the industry shouldn’t just be handing people virtual reality headsets; the gap between educating people and building content hasn’t been filled yet. Just as people went to see motion pictures long before televisions ever arrived in homes, he believes that virtual reality must show its utility to the public before it can be perceived as useful and accessible. “You’re showing them what VR can mean for them — it is the highest amount of technology and availability you can give to people,” Jensen said. At the end of the day, Jensen wants the public and the industry to re-imagine virtual reality as a tool for self-discovery, not merely as an entertainment experience for gamers. “For the people that would say, ‘I would never do anything in virtual reality because I don’t play video games,’ I hope after experiencing JUMP, they will say, ‘I would’ve never guessed that a virtual reality experience would change my perspective on life,’” Jensen said.
https://superjumpmagazine.com/meet-the-man-who-wants-to-reimagine-virtual-reality-353c9b9631ed
['Ryan Fan']
2020-10-16 21:05:53.872000+00:00
['Gaming', 'Product Management', 'Makers', 'Interview', 'Technology']
602
Blockchain Decentralization: The Rise of Women’s Sports and how it can help the fight for equality
Serena Williams is now one of the most recognised sporting names in the world. The successes she has accrued would make any sportsman jealous. She is fortunate that the women who came before her fought for equal prize money, so she is able to generate the same level of commercial success as male players such as Roger Federer and Rafael Nadal. Prize money equality is a topic we will cover more in a future blog post, but it is worth mentioning how tennis has finally caught up in the fight for equality. In terms of recognition, equality in tennis is fairly recent. Martina Navratilova is the most successful grand slam tennis player of the whole Open Era, yet barely gets mentioned when discussing ‘the greats’. Even in the last decade, before Andy Murray won Wimbledon, the media would say that no British player had won Wimbledon in the Open Era and that the last to win Wimbledon was Fred Perry. This isn’t true: in 1977, the year of the Queen’s Silver Jubilee, Virginia Wade shocked the world as she triumphed and became the first British tennis player to win Wimbledon in the Open Era. The media did not treat the women’s game as being on the same level as the men’s, and Wade’s incredible achievement often gets ignored. Even as recently as the 1990s, famous female tennis players were recognised not for their sporting achievements but for the ‘way they looked’. Anna Kournikova became a ‘symbol’ of the sport and arguably one of its most famous names, but this was not based only on her sporting success. Steffi Graf and other women who were dominating the sport through achievement at the time became undervalued and underappreciated because of the world’s interest in how female tennis stars looked rather than how they performed. The balance now seems to have swung towards equality: today the women’s game receives as much media coverage as the men’s, and female tennis players are getting more publicity for their on-court achievements than ever before. The Olympics has also helped women’s sport. The rising success of Team GB and Team USA in particular has led to a new generation of Olympic-watching youngsters. They watch all the sports equally, and equestrian gets as much media coverage as the athletic track events. This has led to a new fan base for sporting women from all over the globe: fans with as much interest in seeing how Jessica Ennis-Hill would do as in seeing Mo Farah, and who cared as much about how well Rebecca Adlington would swim as about Michael Phelps. Bringing a new level of fans to the women’s game is the strongest tactic to help improve equality in the sporting industry. Globatalent is a project that has arrived to bring the next generation of fans to all sports by using blockchain decentralization. Globatalent gives everybody an opportunity to invest in their favourite sports stars, thus giving the athletes extra financial and moral support. This can do wonders for the women’s game. Projects like Globatalent see no difference between the men’s game and the women’s game, and thus help us all in the fight for equality. Using blockchain decentralization in the sports industry is a way of using cryptocurrency to securely and safely invest in future athletes who may not otherwise have had a chance, including the women who are not represented in the media as strongly as the men. It gives fans the opportunity to invest in who they want and who they genuinely believe in.
This is a powerful tool for putting the women’s game on an even playing field. Blockchain decentralization allows fans from all over the world to invest in sports they may previously never have known about. This means it will bring a whole new level of supporters to the women’s cricket teams, the women’s golfers and all the other women’s sports that would never previously have had any exposure. Both men and women who would previously never have had the economic or financial support to stay and play within their chosen sport will now have more opportunity to do so. This is exciting for the women’s game: it means we are likely to see more women athletes break through than ever before. As the women’s side of major sports is not as prominent as the men’s, there is an even higher percentage of young female athletes who give up before they even get their chance to get started. Blockchain decentralization will give these women a chance while also promoting the importance of their chosen sport. This means that the women who would previously have quit at the first hurdle will continue within their sport, and their sports will become even more competitive. The world has come a long way since the Suffragettes, but we are still far from complete equality between the genders. In the sports industry, blockchain decentralization can definitely start the journey towards new equality and prominence for the women’s game. A journey that the whole world can be a part of. Rob Spitz — Globatalent Press Manager Our Community Channels: Telegram: https://t.me/globatalent Our Social Media: Facebook: https://www.facebook.com/globatalent.official/ Twitter: https://twitter.com/globatalent Linkedin: https://www.linkedin.com/company/18286680/ Instagram: https://www.instagram.com/globatalent/ Medium: https://medium.com/Globatalent Bitcointalk: https://bitcointalk.org/index.php?topic=2690003 Youtube: https://www.youtube.com/channel/UCD9YH3-Stofoh0eXWxwVBnA
https://medium.com/globatalent/blockchain-decentralization-the-rise-of-womens-sports-and-how-it-can-help-the-fight-for-equality-611fd94fbf85
[]
2018-02-21 11:05:58.785000+00:00
['Investment', 'Blockchain Technology', 'Investing', 'Cryptocurrency', 'Decentralization']
603
ZoidPay Founding Members — The Story Behind It
After over 1.5 years of working on our product, it would be natural for us to feel proud of what we do and what we have achieved. Success requires that we raise our awareness, and since pride is a rather low-frequency emotion, we have naturally moved past it; all that is left is the joy of watching our ideas manifest and improve people’s lives. We are sure about this assertion because we wanted our products ready and functional for people to use and adopt right away. The wealth of sources, contacts, feedback, and suggestions on how the product needs to look, what features it needs to have in the first stages, and so on, contributed to the development of an app that is tailor-made for the economic needs of modern consumers. At the risk of being repetitious, all great ideas are carried out by people. The people who stand beside us wanted to be involved in ZoidPay as much as possible. In addition, the traction we gained through the contribution of our community members, who have loved the idea since the beginning and saw its potential, was the vehicle that brought us to this day: a day when we can say that our core team is stronger than ever, a suite of skills all working from the same principles.
https://medium.com/zoidcoin-network/zoidpay-founding-members-the-story-behind-it-bab703cce502
['Ştefan Alexandru Băluţ']
2019-10-20 13:14:23.804000+00:00
['Technology', 'Startup', 'Payments', 'Cryptocurrency', 'Blockchain']
604
Core Banking Software Market Size Worth $16.38 Billion By 2027
Core Banking Software Market Size Worth $16.38 Billion By 2027 The global core banking software market size is expected to reach USD 16.38 billion by 2027, registering a CAGR of 7.5% from 2020 to 2027, according to a new report by Grand View Research, Inc. Core banking software and services are seeing rising demand because they enable customers to access their bank accounts and undertake basic transactions from any branch office of their bank, among other benefits. Core banking is often associated with retail banking, with many banks treating their retail customers as core banking customers and managing businesses via their corporate divisions. The advent of telecommunication and computer technology is allowing businesses to share banking information with bank branches efficiently and quickly. Moreover, banks are focusing on moving to core banking applications to support their banking operations via a Centralized Online Real-time Exchange (CORE) of transaction data. Financial institutions and banks are adopting core banking software because it enables them to facilitate decision making through real-time reporting and analytics. Large financial institutions are focusing on implementing their own custom core banking systems. Additionally, credit unions and numerous community banks are outsourcing their core banking systems, thereby driving the market growth. Large financial institutions and banks are increasingly realizing the need to focus on ways of achieving customer delight, thereby creating growth opportunities for the market. While the market is expected to witness steady growth in the near future, the COVID-19 pandemic is anticipated to adversely impact the market to a certain extent. However, the increasing demand for managing customer accounts from a single or centralized server is expected to fuel market growth. Increasing investments in core banking system updates to handle a growing volume of product-channel banking transactions are anticipated to propel market growth over the forecast period. Further key findings from the report are available at the link below: https://www.grandviewresearch.com/industry-analysis/core-banking-software-market
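As a quick sanity check on the headline figures, and assuming the 7.5% CAGR compounds over the seven years from 2020 to 2027 (the report's own 2020 baseline is not quoted here), the implied 2020 market size can be backed out:

```python
def implied_base_value(end_value, cagr, years):
    """Back out the starting value implied by an end value and a CAGR."""
    return end_value / (1 + cagr) ** years

# Headline figures from the summary above; 2020 to 2027 is 7 compounding years.
base_2020 = implied_base_value(16.38, 0.075, 7)
print(f"implied 2020 market size: USD {base_2020:.2f} billion")  # roughly 9.9
```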
https://medium.com/@marketnewsreports/core-banking-software-market-65a3091ee610
['Gaurav Shah']
2020-12-30 11:07:36.630000+00:00
['Technology', 'Artificial Intelligence', 'Big Data', 'CRM', 'Banking']
605
Tips To Reduce Your Cost Of Web Development Without Losing Quality
Tips To Reduce Your Cost Of Web Development Without Losing Quality The first thought to pop into mind when it comes to web development is the expenditure involved. Though the whole task is mainly carried out on a computer, the customization and the amount of time required make it an expensive project. Therefore, even the top web development companies in the world need to be super meticulous when it comes to budgeting decisions for web development projects. Here are a few tips to reduce the cost of your web development projects without compromising the quality of development. 1. Outsource The most important benefit of outsourcing web development projects is cost-effectiveness. The costs of hiring highly trained and certified professionals, creating a workplace, supervision, and other physical requirements are entirely eliminated. And since an outsourcing firm specifically deals with development, the quality of the work remains supreme. The parent company can then focus on branding and marketing instead. There is also the benefit of flexible location. For example, Chicago web development companies can outsource to Eastern European companies where the cost of labor is low. Hence, outsourcing can reduce costs drastically. 2. Use an Already Made Template A website developer in New York or anywhere else can spend a significant portion of time and money on developing a template for your web development project, and still not get the right design simply because of budget and technological constraints. You might ask what the solution is then. Well, it is a pre-made template! Thousands of templates have been built to meet almost every company’s needs. WordPress itself has a vast number of themes that can be used by entrepreneurs. The use of a pre-made theme eliminates a great deal of the design work involved in launching a new website. There are a plethora of online markets where a fully prepared theme can be purchased; several places to consider are Elegant Themes, Template Monster, and Themify.me. Yet again, location is no limitation: whether it is a website development company in Atlanta or a company in Europe, everyone can use this technique. 3. Choosing The Right Platform One more way to reduce the cost of web development is to select the best platform for your development process. The marketing budget will influence the platform decision, however. Conduct consumer research to select a platform; it may be Android, iOS, or cross-platform. Try to figure out what the public’s dominant channel is. Targeting user requirements such as age, location, gender, income level, attitude, and lifestyle will assist your company and make the web development procedure much cheaper. Most Chicago web development companies advise businesses to choose cross-platform development projects. 4. Reduce The Web Pages It is preferable that a company reduces the total number of web pages in the initial phase of its business. The reason is that, whether it is the home page, a secondary page, or a tertiary page, every web page requires effort, time, resources, graphics, images, positioning, and much more. That means more pages directly call for more expenses. So, it is important to ensure that in the initial stages you focus on building a loyal customer base and then increase the content gradually. For instance, you can partner with a web development company in NYC or in another country to build a customer base that is not only local but goes beyond that.
This will save you a lot of money. 5. Build A Minimum Viable Product (MVP) Most CEOs of top tech brands believe that companies that fail to serve the target audience on time with what they require will never succeed, regardless of how advanced their product is. The same applies to web development. The best way to reduce the cost of your web development project is to follow the 80/20 rule: just 20% of the features will solve 80% of the problem. Identify them, build them, and start testing them with actual users in the real world as soon as possible. The sooner a business gets feedback on its 20 percent, the better the final app will be. Therefore, by spending only a scrap of the total budget on recruiting a top web company, you can find out the needs, patterns, and preferences of potential customers with only an MVP and also keep costs under control. Wrapping Up: With this, we have looked at five cost-effective solutions for all kinds of businesses seeking to develop their websites and achieve the best results. With digitization taking over the whole world and commerce operations going virtual, it is extremely important for a business to have an online presence in the form of a website or an app, or both. However, none of these options are cheap, and hence reducing cost without losing quality should be the primary aim.
https://medium.com/@topappfirms/tips-to-reduce-your-cost-of-web-development-without-losing-quality-4af755177ffa
['Kan Smith']
2021-02-22 11:36:42.240000+00:00
['Website Development', 'Web Development', 'Technology', 'Web Design']
606
Voting on the Blockchain
Photo by Anthony Garand on Unsplash Voting is a major potential use case of blockchain technology. In most democracies, there are active debates on topics such as increasing voter turnout, decreasing voter fraud, and increasing public trust in the democratic system. Ironically, the introduction of electronic voting in many countries has prompted underperforming political parties to blame potential hacking or rigging of the electronic system, and to demand a return to completely paper ballots. Bringing voting to the blockchain would allow for transparency in the system while protecting voter privacy. Election monitors and political parties would have a common, tamper-proof record of all votes that could be audited for fraud quickly. A Blockchain-Powered Voting System To submit a vote on the blockchain, an election officer would need to verify a voter’s eligibility to vote. This step would most likely take place off the blockchain (to protect privacy). Once a person’s eligibility to vote is verified, the voter would receive a randomly generated key or token that would allow them to vote. Since each key is unique, the voter would not be able to vote twice (neither in the same voting booth, nor in a different city or state). A Secure System On a voting blockchain, a specific number of votes would form a block. The blocks would be chained together using cryptography, so it would be impossible to alter any vote in the chain without also tampering with other votes in the chain. Additionally, the distributed nature of the system would mean any tampered copy would be rejected by the other nodes. The system would be tamper-proof as long as bad actors did not acquire over 50 percent of the nodes to tamper with the data (and stage a so-called “51 percent attack”). If the blockchain is private, the distribution of nodes (at voting booths) would be overseen by an election authority, and a 51 percent attack would be impossible. If the blockchain is public, the designer would have to take precautions to promote decentralization, and actively monitor against accumulation of massive computing power on the blockchain. Blockchain-Enabled Voting Efforts Around the World Japan Last month, the Japanese city of Tsukuba introduced a blockchain-enabled voting system for its citizens to vote on local social development programs. Voter eligibility was determined by scanning a social security card and entering a password. The vote was generally successful, though several residents were not able to verify their eligibility because they forgot their passwords. Switzerland The Swiss region of Zug, which has emerged as a hub for blockchain-enabled solutions, tested blockchain-based voting in July. For the trial, citizens of Zug were asked to vote on a fictitious consultative issue. Each user signed in to vote via an electronic ID system on their smartphone. Though turnout was understandably low, the vote was a success. United States West Virginia tested blockchain-based voting for military personnel who cannot vote in person. The test involved a very small group of people casting votes for the May 2018 election primaries via their smartphones. Voter eligibility was first verified remotely through a federal or state ID. Voters were then able to cast votes using their smartphones. West Virginia’s pilot program was conducted in partnership with Voatz, a venture-backed Boston startup offering blockchain voting solutions. 
Sierra Leone In Sierra Leone, a blockchain company ran a small test of its blockchain voting solution during that country’s 2018 presidential election. Corporate Voting Since blockchain technology is relatively new, governments are unlikely to adopt it on a large scale until issues of security and scalability are fully addressed. Instead, corporations are likely to lead the way in the adoption of blockchain voting solutions for corporate votes. In 2016, Nasdaq tested a blockchain-enabled e-voting platform in Estonia, on its Tallinn exchange. Earlier this year, the Spanish bank Santander introduced blockchain-based voting for its annual shareholder meeting. Developing a reliable blockchain voting infrastructure is a priority for many companies and industry organizations. Blockchain Voting Needs Trustworthy Governance Initially, blockchain voting is most likely to be successful in the countries that need it the least, such as Switzerland. For several years at least, blockchain voting will not be viable in countries that do not have trustworthy election commissions. Though blockchain solutions aspire to be trustworthy because they are decentralized, they ironically need to be put in place by trustworthy organizations in the early stages of their adoption. Conclusion Though voting is an excellent use case for blockchain technology, elections are rightfully a sensitive area for the introduction of a new technology. Blockchain infrastructure is being rapidly developed in many industries and for numerous use cases around the world. Large scale voting on the blockchain will become possible once voting pilots have successfully tackled security and scalability issues and engendered public trust.
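As a rough illustration of the chaining idea described above (a toy sketch, not the design of any specific voting system mentioned in this article), the snippet below hashes each block of votes together with the previous block's hash, so altering any recorded vote invalidates its block and every block that follows it:

```python
import hashlib
import json

def block_hash(votes, prev_hash):
    """Hash a block's votes together with the previous block's hash."""
    payload = json.dumps({"votes": votes, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(vote_batches):
    """Chain batches of (anonymous token, choice) votes into blocks."""
    chain, prev = [], "0" * 64  # placeholder hash for the genesis block
    for votes in vote_batches:
        h = block_hash(votes, prev)
        chain.append({"votes": votes, "prev": prev, "hash": h})
        prev = h
    return chain

def verify(chain):
    """Recompute every hash; a tampered vote breaks its block and all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or block_hash(block["votes"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([[("token-a1", "yes"), ("token-b2", "no")],
                     [("token-c3", "yes")]])
print(verify(chain))                       # True
chain[0]["votes"][0] = ("token-a1", "no")  # tamper with a recorded vote
print(verify(chain))                       # False
```

A real deployment would add the distribution across nodes, the one-time voter tokens, and the consensus rules discussed above; the hashing alone only makes tampering detectable, not impossible.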
https://medium.com/blockstreethq/voting-on-the-blockchain-b49bf8c9e8c
['Shaan Ray']
2019-12-15 07:00:44.343000+00:00
['Cryptography', 'Blockchain', 'Blockchain Technology', 'Bhq Contributors', 'Voting']
607
On the cards: how will blockchain space evolve in 2021?
With the New Year just a few days away, it is once again time to reflect on the previous 12 months and look ahead to the next year. Making predictions for the future of the ever-changing blockchain space is a difficult task, but with enough experience and knowledge about the industry, spotting emerging trends becomes easier. With that in mind, we turned to our resident blockchain whizzes — LimeChain co-founder George Spasov and leading blockchain architect Daniel Ivanov — to learn how they see the DLT space evolving over the next 12 months. Here’s what they had to say. DeFi maturation Twenty twenty has been a challenging year in many respects, but there have also been some major developments in the blockchain space. For example, this was the year that saw decentralized finance explode onto the scene. Encouraged by DeFi’s potential, many talented developers jumped on the opportunity to experiment with the concept and develop dApps aimed at democratizing finance. Those early efforts helped establish prominent DeFi services, such as Uniswap, Compound, and Yearn.finance, which introduced people to new ways to earn money and access funds. However, they’ve also produced some less promising results. According to Dani, the DeFi concept will be fleshed out and the space will mature over the next 12 months. The lessons learned during this year’s largely experimental phase in DeFi’s development will be applied to future projects in the sector. And as developers gain a deeper understanding of DeFi’s potential and limitations, never-before-seen services will continue to emerge. This year saw the emergence of yield farming and flash loans, which were only made possible through DeFi applications. Dani expects new and exciting DeFi services to pop up in 2021. Meanwhile, George points to the variety of existing DeFi dApps, noting that most popular products have their own takes on the DeFi concept, which makes them suitable for different applications. For that reason, George doesn’t think we are likely to see consolidation in the DeFi space anytime soon. Instead, he expects the DeFi landscape to become even more diverse over the next 12 months. Another major development was the beginning of Phase 0 of the Ethereum 2.0 project, marked by the launch of the Beacon Chain earlier this month. The Beacon Chain will introduce proof-of-stake to the Ethereum protocol. It is also an important first step in introducing shard chains, because they require staking to work securely. “The launch will likely prompt exchanges and other major ETH holders to start offering staking services aimed at people who want to participate in the Beacon Chain but do not have the crypto to do so,” Dani tells us. On the technical side, 2021 will likely see Eth 2.0 growing and evolving through the launch of testnets. Central bank digital currencies Considerable progress has also been made this year with regard to central bank digital currencies, with the People’s Bank of China launching a pilot for its digital yuan project in four major Chinese cities. With the pilot having gained some traction, the viability of CBDCs has become more apparent. Dani explains why these could be valuable instruments for central banks: “Central banks currently rely on commercial banks to roll out their monetary policies among the general population. With CBDCs, central banks will have tools to interact directly with the end consumer.
These instruments will also enable central banks to quickly allocate funds, for example, financial aid for the most vulnerable or badly hurt during a crisis.” While it’s difficult to predict whether CBDCs will be among the leading trends in 2021, Dani is convinced that there will be major developments in that space over the next five years. Growing interest from governments Initiatives such as CBDC research and development are indicative of the growing interest blockchain is drawing from governing bodies around the world. Multiple countries across Asia, including China and Singapore, have been quite active in their efforts to utilize the technology and become leaders in that space. Closer to home, the European Union has identified blockchain development as a key part of the bloc’s strategy for building a digital economy. The launch of the European Blockchain Services Infrastructure (EBSI) — a network of distributed nodes aimed at delivering cross-border public services across Europe — reflects the EU’s significant interest in blockchain. Our colleagues expect 2021 to provide further evidence of the EU’s growing commitment to blockchain adoption. From PoC to MVP This year, the enterprise sector again showed hesitation to move past the proof-of-concept stage when it comes to blockchain solutions. The positive here is that the knowledge gained from all that experimentation and prototyping could embolden businesses to take the next step and start developing minimum viable products and even more sophisticated solutions. George expects businesses to start seeing blockchain primarily as a valuable tool for building solutions that connect various organizations across a single supply chain. The technology’s potential in supply chain management is already widely recognized, and it’s time for it to be actively explored. The next year could see major developments on that front. In contrast, George anticipates diminishing interest in blockchain consortia. He argues that competing businesses have so far seen negligible benefits from joining forces to form such networks. The journey continues Being only a few days away from the New Year, we cannot help but feel excited about the future of blockchain. History has shown that no matter what happens, the sector continues to evolve. One chapter ends, another begins. The blockchain journey continues, often in unexpected but exciting new directions. Our experts anticipate that the blockchain sector will find new ways to surprise us in 2021 and beyond. Perhaps the growing interest in non-fungible tokens will spark a boom in the fledgling market for digital art and memorabilia. Or maybe the video game industry will embrace digital tokens to fuel a new breed of virtual economies. We cannot wait to see what the future holds. Here’s to 2021 and another exciting blockchain ride!
https://medium.com/limechain/on-the-cards-how-will-blockchain-space-evolve-in-2021-6047816949c0
['Dimitar Bogdanov']
2021-02-18 10:04:24.445000+00:00
['Blockchain', 'Blockchain Development', 'Blockchain Technology', 'Distributed Ledgers']
608
‘This Time It’s Different’
Retailers have to engage with the needs and desires of the consumer and adapt to their behaviour. The opportunity to do this online is measurable, data-driven and creative — it offers more opportunity than any other channel. Customer loyalty, retention and improving the online experience were all paramount to the success of eCommerce. The GFC brought with it an industry-wide decline in retail sales, both online and in-store. Consumers had less credit available, meaning people were forced to save more and so the reduced consumption expenditures meant a slowdown in consumer spend. Consumer confidence became a central problem. But still, trends began to emerge, dividing the online and traditional retail markets. In-store, customers were deterred from impulse buying, eliminating room for unnecessary purchases. As a result, consumers took time to research online and find the best deals. Constant price comparisons were the result. This has not changed today. Crisis or not, people are hooked on the ability the internet gives them to find the best deals. 91% of online shoppers use the internet because researching products online makes them feel more confident about their purchases. The same percentage of buyers also announced that comparing prices online reassures them that they are getting the best deal. People used the internet because it didn’t feel like an impulse purchase. In 2003, the SARS outbreak dramatically changed consumer behaviour. This idea of empowering the individual in a time of crisis, controlling when and what to purchase and how it’s distributed and delivered, comes with eliminating as many variables as possible. Even during a weak economy or public health crisis, people continue to use the internet to shop. The internet provides a comprehensive set of information related to a particular product or service that is not always available in store… Nor is that information, price and availability limited to a single retailer. In a time where markets become increasingly challenging they also become more competitive, digital marketing and the internet allow you to break into new markets and retain existing share. The emerging trend of avoiding going in-store is here. As Governments around the world attempt to contain the spread, choices become limited. It is going to get worse before it gets better. In the USA Nike, Apple, Microsoft, Abercrombie & Fitch, Urban Outfitters, Glossier, Patagonia and LuluLemon are all closing their doors. Online stores are always open and now is no exception. Retail behemoth, Amazon, reported they have already seen significant spikes in activity on their site. Scrambling to hire an extra 100,000 team members to meet warehouse and delivery capacity requirements. They are even being accused of putting profit before safety by warehouse workers in the UK as they are asked to work overtime. eCommerce is absolutely roaring as a result. To quote Mark Dolliver at eMarketer: Since older individuals are the ones for whom the virus has been most fatal, they may be especially likely to alter their behaviour, which could mean a more widespread adoption of eCommerce, an area lacking for the demographic. In the US, eCommerce is booming. The search count for health products and groceries has surged. Moreover, digital shoppers are more willing to convert on products they need despite longer delivery times. Meanwhile, in the UK, there are reports that eCommerce purchases could increase to as much as 40% of total retail spend, up from just over 10%. 
Not all shall prosper, but opportunities will begin to present themselves, especially in the world of online. Here at Particular Audience, our portfolio of clients across France, the USA, the UK, Australia and New Zealand has shown some clear winners in the Pharmacy, Groceries, Luxury, Apparel, Furnishings and Homeware verticals. There is great optimism amid objective risk mitigation. Retailers selling social products, including party supplies, are having a tougher time. It’s been inspiring watching their rapid shifts toward product diversification and their quick pivots on marketing messages as they adapt with the times. It will be interesting to see how fast fashion is affected. Pure-play online retailers are doing well, and we haven’t seen much of a shift in behaviour for those with physical stores at this stage, perhaps because their economic buyers have little concern for the overall economic situation; even with a reduction in buying outfits for events, the reallocation of spend from bricks to clicks seems to be holding up online metrics. A focus on cash-flow-generating investment, with transparent performance analytics and strategic collaborations with specialist vendors, is imperative. The greatest risk to retailers right now is cutting valuable investments under the illusion that they are expenses. Vendor consolidation in areas that benefit from data network effects, such as data mining, AI and machine learning, may be an astute move with the right partner. Everything outlined above regarding the changing face of the consumer is an ecosystem of demand data, benefiting from great data network effects, and should drive every single investment decision a business makes. Keep On Keeping On During the GFC, businesses that invested in marketing and technology saw a 4.3% increase in profits during the recovery period. Conversely, those who cut marketing spend saw a negative return of -0.8% in profits over the longer recovery period. But what about profitability during the downturn? During the GFC, a study of 1,000 firms in the UK revealed that they maintained average profitability of 8% if they increased their marketing spend during the downturn. Guy Consterdine, a marketing and research consultant commissioned by the Periodical Publishers Association of Ireland (PPAI) after the GFC, summarises this by saying that “contrary to most marketers’ behaviour, the evidence clearly shows that it pays to maintain advertising expenditure in an economic downturn.” A recession actually provides opportunities for marketers, for it is a chance to invest to gain market share and market leadership, and to attack timid rivals. This can also improve the stock market valuation of the company. To quote Peter Fader of the Wharton School: As companies slash advertising in a downturn, they leave empty space in consumers’ minds for aggressive marketers to make strong inroads. Price promotions can also be tempting in a downturn, and are widespread in consumer markets, but they are likely to damage not only profits but also brand values. Brand values that are impaired in bad economic times will be hard to restore when the economy expands again. Channel management is central to an optimal response to changing consumer trends. Slashing prices and desperately trying to attract consumers in places where they won’t actually be is a bad strategy. Retailers with physical assets must be swift in mitigating solvency risks and should leverage physical locations for cost optimisation and convenience.
Retailers have access to the technology, tools and resources to assist with this now more than ever. Cash-flow-positive technology investments and those that can reduce operating expenses and labour costs are in focus. Where Are Our Machine Overlords When You Need Them? Artificial intelligence is new to most retailers since the GFC. The opportunities here include detailed analytics and decisioning systems; prediction and experience optimisation; and cost-cutting and resource automation. Now is the time to consolidate legacy systems with cutting-edge third-party technology vendors who will let retailers maximise every opportunity while opportunities are less frequent, and who will turn your retail stack into one that emerges from this downturn as a market leader. With the increase in website traffic for retailers, it will pay to investigate how big data and machine intelligence can make you more defensible in a downturn, memorable, reliable and a winner in the rebound. Leveraging data for predictive models will help retailers forecast sales, and behaviour-driven recommendations will let retailers react effectively and make on-site discovery intuitive. This is not capitalising on downward trends or profiteering from a crisis; all in all, decisive measures need to be taken to assist your target market and keep you in business. In fact, if you are short on resources, investment in optimisation and automation during this time of uncertainty will be what keeps you on track when the rebound comes, too. Servers scale; untrained staff tend not to. This includes inventory allocation optimisation, visibility on attribute trend analysis, geographic demand gauging, and intuitive and automated experiences both online and in-store. Investing in technology has never been more important, and the role of the digital team becomes critical, especially as human capital comes under fire elsewhere in the business. We are also witnessing increased interest in robotics and warehouse automation across even mid-market retailers, counteracting human capital risk. Ocado, for example, is weathering the current storm beautifully (see the Ocado share price chart). Vendors who eliminate business costs will assist your business in times of uncertainty. Automation itself serves this purpose to some extent, but look for flexible billing models, such as pay-per-performance, that offer business owners confidence in times of uncertainty. The Harvard Business Review concludes,
https://medium.com/particular-audience/this-time-its-different-6de153021fd5
['James Taylor']
2020-03-19 10:47:18.253000+00:00
['Coronaviruses', 'Retail Technology', 'Retail', 'Recession', 'Ecommerce']
609
Encouraging the Wrong State of Mind for Creating Good Code
Encouraging the Wrong State of Mind for Creating Good Code Your company is doing it and you are enabling it You can read this article on my own blog, if you prefer that to Medium. Over the last year I’ve started noticing a strange pattern in how innovative ideas come to me when working on a project. Innovative — as in solving a problem in a creative way that makes the solution much easier to implement, much less fragile, or both. It seems to me that innovative ideas emerge in two states of mind: bored and stressed. I’m going to focus on how this applies to people’s professional work, rather than their hobby projects, because that’s where employees and managers shun these two states of mind most. Boredom and stress are both, to some extent, external stressors for your conscious thinking. The simplest way to see this is to think about how you feel if you are overworked, anxious, or bored for a long time: you feel bad. But external stressors aren’t all bad. The dose makes the poison. In recent years, medical researchers have become more and more interested in the idea of hormesis, the long-term positive adaptation of our bodies triggered by short-term, non-deadly stressors. The best example of the hormetic effect is exercise causing a short-term rise in cortisol, but actually lowering cortisol levels overall during that day. Similarly, exercise induces oxidative stress (a factor in a host of problems ranging from dementia to cancer to arthritis), but because your body reacts to that stress by producing enzymatic antioxidants (mostly superoxide dismutase, catalase and glutathione peroxidase), it actually leads to lower overall oxidative stress. Other less proven examples of hormesis are short-term exposure to extremely hot or cold temperatures, fasting, and even low doses of ionizing radiation and alcohol (ethanol). But this “what doesn’t kill you makes you stronger” principle also applies, I would argue, to your thinking. When you are in a bad state of mind, you are forced to think about getting into a good state of mind, and your brain starts getting creative. On Boredom The benefits of boredom for creativity are probably obvious to most of you. If you have nothing to do for extended periods of time other than reading or procrastinating, ideas will start flowing through your mind. In this context, boredom also implies that there’s no state of urgency around whatever you are working on — everything is going smoothly; sure, there might be a small issue here and there, but your 8h/day job really only takes about 10 minutes a day to do. Much like sleep, boredom allows your subconscious to take over for you, to do the heavy lifting in terms of ideas. It also allows your brain to restructure information. Boredom is also nice in terms of giving you time to detach yourself; it allows you to see the bigger picture. If you are focusing on a single tiny bug, your whole life becomes the parts of the code where you are tracing the bug. If you are thinking about predicting where the next tiny bug will emerge, your mind will switch to a wider perspective of the code and data you are operating on. Furthermore, boredom provides you with a hunger to spend all that free time on something. If you are constantly bombarded with tasks, you don’t necessarily feel the need to plan a 40-hour refactoring adventure, but if you are extremely bored, finding something difficult to do might seem like a great idea. The number of times I went from bored to “I have an amazing idea” is countless.
On Stress One more controversial claim I will make is that stress is also a great helper when it comes to creativity. Let’s say that the project you are working on is burning; you’ve been fixing issues or adding critical features for 10 to 14 hours a day for the last 5 days. You won’t necessarily have a “detached” view of the project, but you will be so familiar with every line and every snippet of code that you won’t need to detach yourself, because your mind can already navigate the project at any level it desires. You won’t have the energy to involve your subconscious while awake, but I guarantee you that you will be thinking about the project in your sleep. Most importantly, you are in a situation where you are forced to think creatively, because you either find a way to make your job easier by being innovative, or you soon won’t be able to handle it. This is an uncomfortable position to be in, for sure, but your brain is made for these high-stakes scenarios. Many of us become creative and motivated only when we have skin in the game, when our head is on the chopping block. You might not want to do that 40-hour refactoring, but if it’s the only way to get things running decently again and stop issues from piling up, it will seem like the best way forward.
https://george3d6.medium.com/encouraging-the-wrong-state-of-mind-for-creating-good-code-531031517881
['George Hosu']
2019-05-04 10:42:49.911000+00:00
['Software', 'Programming', 'Software Development', 'Technology', 'Psychology']
610
Three Worst Technologies Ever
Hundreds of technologies are released every single day to make our lives easier; however, not everything can be good, and the same goes for technology. While brilliant people spend their days and nights developing useful technologies, sometimes these technologies become a curse on mankind. Today, we are going to talk about some of the worst technologies in the world. Hacking Software While breaking into a criminal’s online accounts can help authorities make wise decisions, this kind of technology can ruin the life of an ordinary person. With hacking software, people can lose all their hard-earned money from their bank accounts. Moreover, private data such as pictures can be leaked anywhere in the world. E-cigarettes We all know that smoking is injurious to health, but technologies such as vapes attract young people, who get addicted to this killer in no time. The sale of e-cigarettes is banned in various states, but such items are still traded illegally, increasing the danger to our kids. Robots The future might be easier for people who can afford robots and machines, but hard workers who depend solely on manual labor will struggle to survive. In the future, robots are going to take over much of the human workforce, and that will be alarming for the poorest communities.
https://medium.com/@elysia-12terrance/three-worst-technologies-ever-cdaf4236de09
['Elysia Terrance']
2021-02-23 06:50:28.692000+00:00
['Technology Trends', 'Technology', 'Technology News']
611
A Day at Nava as an Infrastructure Engineer
It’s hard to describe what a typical day at Nava is like because no two days are ever the same. Every day brings new projects, challenges, and opportunities. So we’re sharing just a glimpse of what work is actually like for different Navanauts. Here’s what Wei Leong, an infrastructure engineer, has been up to lately. Infrastructure engineer Wei Leong working from home. Current projects I’m working on the Medicare Payment System Modernization (MPSM) project. MPSM modernizes the Medicare payments system. This system affects approximately 57 million people who depend on Medicare and processes over $500 billion in claims each year. It’s a vital service. But the current system is about 40 years old, runs on millions of lines of COBOL and assembly language, and is deployed to mainframe computers. Our job is to build a platform that enables different teams to deploy and operate their services using modern tools such as Terraform, Docker, and Amazon Web Services (AWS). Right now, we’re in the midst of two migrations: in one, we’re updating our infrastructure configuration from Terraform 0.11 to 0.12; in the other, we’re moving from Amazon Elastic Container Service (ECS) instances to AWS Fargate. We’re also working on standing up infrastructure for public APIs that we plan to roll out in the future. It’s an interesting project because it provides the opportunity to work on some legacy technologies that aren’t common today. You don’t get to integrate decades-old and modern technologies very often. On the day-to-day I tend to get a lot of my coding and implementation work out of the way in the mornings. Because I’m on the East Coast — our team is distributed across the country — I start the day earlier than most. It’s a good time to do more quiet, focused, solo work. We operate on two-week sprints. On the last day, different teams within the MPSM project get to demo what they’ve been doing over the last couple of weeks. Recently, I demoed a deployment pipeline using a Jenkins shared library that our team has been working on. The goal of these demos is to share one or two highlights with stakeholders, including people from government and other contractors on the project. Favorite part of the day My favorite part of the day is usually in the afternoon, when everyone is available to either pair on certain tasks or collaborate on system design and research projects. A lot of our collaboration is done over Google Meet, Zoom, or even Slack messages. If I’m stuck on a problem, someone’s around to help me resolve it because everyone, regardless of where they are, is available during this time. It also enables me to better understand what other people are working on. On working from home during the COVID-19 pandemic I usually work out of the Washington, D.C. office, but like everyone else, I’m working from home right now. My partner and I rotate childcare responsibilities every few hours, but we don’t have a specific schedule. (We had one but it just didn’t hold up.) My team’s been very understanding and accommodating of my (sort-of) chaotic schedule. I just make sure to communicate when I’m available and when I’m not. Usually, meetings or discussions can be moved to accommodate. Join us Nava is a talented and diverse team working holistically across engineering, research, design, and operations to positively transform how people experience government. And we’re hiring. Browse open roles to learn more.
https://blog.navapbc.com/a-day-at-nava-as-an-infrastructure-engineer-bce5288db7a
['Nava Pbc']
2020-07-16 17:10:57.124000+00:00
['Meet The Team', 'Civictech', 'Jobs', 'Government', 'Technology']
612
Could Carbon Save The Planet?
Could Carbon Save The Planet? A new miracle material could turn the tide on global warming. The world faces a crisis, one propagated by a single element: carbon. Humanity is pumping out record levels of carbon dioxide, methane, plastics and pollutants, all of which use carbon chemistry at their core. These chemicals are heating the planet, destroying our oceans and wrecking the land. But could a revolutionary new use for carbon help us turn the tide on the environmental catastrophe we have created? We basically have two problems here: our waste polluting the environment and our means of generating energy. Our modern, comfortable lives depend on the use of a vast amount of energy and the cheap supply of mass-produced goods, both of which pump out carbon-based pollutants. So we need a way of reducing the carbon waste in the environment, all the while keeping our sources of energy high enough to fuel our modern lives. Recent innovations like electric cars that are actually useful, vast wind farms, solar power stations, recycling and biodegradable products have taken us closer and closer to climate nirvana. But these have two big flaws: the batteries needed to hold the energy are massively damaging to the environment (though an electric car is still better than a petrol one), and none of this actually reduces the carbon already out there in the wider world. But a new battery technology could change all of that. Imagine if you could recycle plastic or pull carbon dioxide out of the air and turn it into a battery that would outperform anything in use today. This is the future promised by Graphene. Diagram of graphene — Pixabay Graphene is simply carbon arranged in a flat honeycomb lattice a single atom thick. Sounds simple, doesn’t it? In fact, Graphite is just lots of fragmented layers of Graphene on top of each other, so you will have some of this wonder material in your pencil case. But Graphene has some rather amazing properties that mean it could create a battery revolution. It is a zero-gap semiconductor; this basically means it conducts electricity incredibly well, better than any metal. But it also ‘hangs onto’ electrons very well: electrons can leave the edge of the sheet and form a circuit, but they find it hard to leave the middle of the sheet. We can use these properties to enhance our current battery technology (still storing the energy in chemical bonds), but if we could make enough Graphene, we could build supercapacitors out of it (storing energy as pure electromagnetic potential). A capacitor is just two plates of very electrically conductive material sandwiched on either side of a thin insulator. This means you can push a charge onto the two plates: one becomes electrically positive and the other negative, as one plate becomes electron-rich and the other electron-poor. Once the maximum charge has been built up, you can use it just like a battery. Normal capacitors are great for dumping small amounts of energy very quickly. The flash on a camera tends to use a capacitor to generate a short, bright flash, but it couldn’t power a light for any decent period of time. Bring in Graphene supercapacitors: just like normal capacitors, they can dump all their electrical energy very quickly, but they have a much higher capacity. One of these could power a light for a very long time. Even better, they take only a few seconds to charge fully!
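To put the capacitor description above into numbers: the energy stored in any capacitor is E = ½CV². A quick back-of-envelope comparison, using illustrative figures that are not from the article (roughly a photoflash capacitor versus a large commercial supercapacitor cell):

```python
def capacitor_energy_joules(capacitance_farads, voltage_volts):
    """Energy stored in a capacitor: E = 1/2 * C * V^2."""
    return 0.5 * capacitance_farads * voltage_volts ** 2

# Illustrative example values, not figures from the article:
flash_cap = capacitor_energy_joules(0.001, 300)  # ~1 mF photoflash capacitor at 300 V
supercap = capacitor_energy_joules(3000, 2.7)    # 3000 F supercapacitor cell at 2.7 V

print(f"photoflash capacitor: {flash_cap:.0f} J")  # 45 J, dumped in a flash
print(f"supercapacitor cell: {supercap:.0f} J")    # 10935 J, roughly 3 Wh
```

The difference comes almost entirely from capacitance, which is where graphene's enormous surface area per gram is expected to help.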
So while a Graphene-doped Li-ion battery can now be found in the highest-performing commercially available batteries, it is these supercapacitors that will bring about a revolution. Imagine an electric car powered by Graphene supercapacitors. It would be able to charge in a minute or two, power the car for 500 miles or dump all of its energy at once, delivering several thousand horsepower. All while the battery pack weighs less and the batteries don’t degrade over time. How brilliant! But this quick charging and high power doesn’t just mean insanely good cars. One of the big issues we have with renewable energy is capturing it for later. We have very efficient solar and wind power, but our current range of batteries can’t charge quickly enough to make the most of the peak power moments. This means loads of the energy generated actually goes to waste. By contrast, Graphene supercapacitors would be able to keep up with the charging and power demands that we have become accustomed to. Using Graphene supercapacitors to store renewable energy means that we can switch to 100% renewables sooner, with fewer wind farms and solar power plants. Solar power plant — Pixabay Oh, and on the subject of solar and wind power: Graphene can be used to make ultra-efficient solar panels and generators, so widespread use will also mean that these power plants can produce even more power! It is important to remember that Graphene is non-toxic. We could dump thousands of tonnes of it in the ocean and we wouldn’t see many negative effects — certainly not on the scale that our current technology does. All of the current technology that it could replace (like Li-ion batteries) requires toxic rare-earth metals. These are killing the environment, are hard to come by, and the mines used to extract them have questionable human rights records. Graphene could be made in a way no rare-earth metal can be: by atmospheric extraction. There is currently too much CO₂ in the atmosphere and too much plastic in the water. We can extract the carbon from these sources, releasing oxygen from CO₂ and hydrogen from plastics (which we could also use as fuel). This would mean that the Graphene produced could be massively carbon negative, reducing the amount of carbon in the environment and turning the tide on climate change and pollution! So, why don’t we have Graphene-powered smartphones or cars yet? If this technology is so amazing, shouldn’t we be pushing for it?! Well, Graphene is a little harder to produce than I have let on. Our technology right now can produce small flakes of Graphene that can be used to enhance metals, fabrics and sensors at a relatively cheap price. However, making large sheets of Graphene, as you would need for supercapacitors, is not currently possible. After all, this is a sheet a single atom thick; we are talking about engineering on the molecular scale here. However, there is no reason to say that these advances are impossible. After all, scientists in 2015 discovered a way to make Graphene for 100 times less, and just a few days ago NanoTech got $27.5 million in funding to create Graphene batteries. Jumps in technology like this mean that your devices may be powered by pure carbon in the not-too-distant future. So, can carbon save the planet? Theoretically, yes it can! There is a way in which we can use the carbon compounds that we have polluted the world with to produce technology far more powerful than we currently have, whilst reducing global levels of carbon.
We just need to figure out the logistics of getting these atoms in the right order.
https://medium.com/predict/could-carbon-save-the-planet-229fe4598c36
['Will Lockett']
2020-10-14 21:52:33.178000+00:00
['Environment', 'Climate Change', 'Science', 'Technology', 'Future']
613
6 Mobile App Development Myths with Effective Solutions to Check
The mobile app development industry is growing at a rapid pace. There is no denying that developing an app is not easy, largely because of the complexity involved in the mobile app development process. Sometimes developers build ineffective apps that completely spoil an entrepreneur’s mobile strategy, and the process becomes even more problematic when app development myths lead to bad implementation decisions. Common Myths and Their Solutions in the Mobile App Development Process: Don’t Pay Attention to Wireframes and Prototypes When it comes to mobile app development, you might have come across the tip that wireframes are useless and therefore not worth investing time in, or heard that a developer can build an app without prototypes. Both claims are false. You need wireframes to get an early picture of what your app will look like, while a prototype helps by showing the visual graphics and animation. Code Everything at the Start You may come across people who suggest coding the entire product in the initial version of the mobile app. Don’t follow this suggestion. Instead, opt for a Minimum Viable Product (MVP), an early version of the app. It will tell you whether the app is feasible to launch, it lets you compete with your competitors, and it leaves scope to develop the app further. Your App Will Receive Huge Downloads Once It Hits the App Stores When building a mobile app, don’t forget that it will be one of millions in the app marketplace, so the idea that everyone will download your app as soon as it launches is a myth. In fact, you need to do some serious promotion. One such method is ASO, or App Store Optimization, a technique that helps your application rank higher in search and brings visibility to your app among its audience. Sticking to One Plan Is a Wise Decision Mobile app development faces cut-throat competition, and its methodologies keep changing. This clearly means that sticking to one plan will not bring success. In some cases you will need to make modifications even after the testing phase, so always stay ready for change instead of relying on assumptions. Once Your App Is Released, the Work Is Over Some top app development companies believe that once the app is developed, the work is done. Their developers forget about maintenance, modifications and so on, which can leave bugs in the app. The developer may also fail to update the app for the latest iOS/Android versions, which will harm your newly developed application. Great Features Mean a Great App It is important to address the fact that great features do not necessarily pave the way for a wonderful app. In fact, you need to ask yourself some important questions: How is this feature beneficial for my users? How does it make my users’ lives easier? If you can’t answer these questions properly, the feature isn’t worthwhile. This article was originally published here.
https://medium.com/appdexa/6-mobile-app-development-myths-with-effective-solutions-to-check-b30d11e2da0c
[]
2017-07-28 11:50:23.264000+00:00
['Development', 'Technology', 'Mobile App Development', 'Mobile Apps', 'Apps']
614
Charm City Kings — (Movie 2020) Watch Or Download, [Full-HD] On HBO
☑ Watch Exclusive Charm City Kings (2020) Full Movie ☑ Watch Charm City Kings (2020) : Full Movie Online Free Mouse desperately wants to join The Midnight Clique, the infamous Baltimore dirt bike riders who rule the summertime streets. When Midnight’s leader, Blax, takes 14-year-old Mouse under his wing, Mouse soon finds himself torn between the straight-and-narrow and a road filled with fast money and violence. WATCH FULL MOVIE Charm City Kings (2020) [HD 1080p] ✨ Charm City Kings (2020) : Complet en francais ✨ Official Partners “HBO” TV Shows & Movies ✨ Watch Charm City Kings (2020) Full Movie ⇒ https://t.co/nrxEBeWnuD?amp=1 Specialty channels are commercial broadcasting or non-commercial television channel that focus on an individual genre, subject, or targeted television set market at a specific demographic. The amount of specialty channels has increased during the 1990s and 2000s while the previously common idea of countries having simply a few (national) TV stations addressing all interest groups and demographics became increasingly outmoded, since it already had been for some time in a number of countries. About 65% of today`s satellite channels are specialty channels. ●Charm City Kings (2020) full Movie Watch Online ●Charm City Kings (2020) full English Full Movie ●Charm City Kings (2020) full Full Movie, ●Watch Charm City Kings (2020) full English FullMovie Online ●Charm City Kings (2020) full Film Online ●Watch Charm City Kings (2020) full English Film ●Charm City Kings (2020) full Movie stream free ●Watch Charm City Kings (2020) full Movie subtitle ●Watch Charm City Kings (2020) full Movie spoiler ●Charm City Kings (2020) full Movie tamil ●Charm City Kings (2020) full Movie tamil download ●Watch Charm City Kings (2020) full Movie download ●Watch Charm City Kings (2020) full Movie telugu ●Watch Charm City Kings (2020) full Movie tamildubbed download ●Charm City Kings (2020) full Movie to watch Watch Toy full Movie vidzi ●Charm City Kings (2020) full Movie vimeo ●Watch Charm City Kings (2020) 💦 ALL CATEGORY WATCHTED 💦 An action story is similar to adventure, and the protagonist usually takes a risky turn, which leads to desperate scenarios (including explosions, fight scenes, daring escapes, etc.). Action and adventure usually are categorized together (sometimes even while “action-adventure”) because they have much in common, and many stories are categorized as both genres simultaneously (for instance, the James Bond series can be classified as both). 🔥 ANALYZER GOOD / BAD 🔥 To be honest, I didn’t catch Zombieland when it first got released (in theaters) back in 2008. Of course, the movie pre-dated a lot of the pop culture phenomenon of the usage of zombies-esque as the main antagonist (i.e Game of Thrones, The Maze Runner trilogy, The Walking Dead, World War Z, The Last of Us, etc.), but I’ve never been keen on the whole “Zombie” craze as others are. So, despite the comedy talents on the project, I didn’t see Zombieland….until it came to TV a year or so later. Surprisingly, however, I did like it. Naturally, the zombie apocalypse thing was fine (just wasn’t my thing), but I really enjoyed the film’s humor-based comedy throughout much of the feature. With the exception of 2004’s Shaun of the Dead, majority of the past (and future) endeavors of this narrative have always been serious, so it was kind of refreshing to see comedic levity being brought into the mix. Plus, the film’s cast was great, with the four main leads being one of the film’s greatest assets. 
As mentioned above, Zombieland didn’t make much of a huge splash at the box office, but certainly gained a strong cult following, including myself, in the following years. Flash forward a decade after its release and Zombieland finally got a sequel with Zombieland: Double Tap, the central focus of this review post. Given how the original film ended, it was clear that a sequel to the 2008 movie was indeed possible, but it seemed like it was in no rush as the years kept passing by. So, I was quite surprised to hear that Zombieland was getting a sequel, but also a bit not surprised as well as Hollywood’s recent endeavors have been of the “belated sequels” variety; finding mixed results on each of these projects. I did see the film’s movie trailer, which definitely was what I was looking for in this Zombieland 2 movie, with Eisenberg, Harrelson, Stone, Breslin returning to reprise their respective characters again. I knew I wasn’t expecting anything drastically different from the 2008 movie, so I entered Double Tap with good frame of my mind and somewhat eagerly expecting to catch up with this dysfunctional zombie killing family. Unfortunately, while I did see the movie a week after its release, my review for it fell to the wayside as my life in retail got a hold of me during the holidays as well as being sick for a good week and half after seeing the movie. So, with me still playing “catch up” I finally have the time to share my opinions on Zombieland: Double Tap. And what are they? Well, to be honest, my opinions on the film was good. Despite some problems here and there, Zombieland: Double Tap is definitely a fun sequel that’s worth the decade long wait. It doesn’t “redefine” the Zombie genre interest or outmatch its predecessor, but this next chapter of Zombieland still provides an entertaining entry….and that’s all that matters. Returning to the director’s chair is director Ruben Fleischer, who helmed the first Zombieland movie as well as other film projects such as 30 Minutes or Less, Gangster Squad, and Venom. Thus, given his previous knowledge of shaping the first film, it seems quite suitable (and obvious) for Fleischer to direct this movie and (to that affect), Double Tap succeeds. Of course, with the first film being a “cult classic” of sorts, Fleischer probably knew that it wasn’t going to be easy to replicate the same formula in this sequel, especially since the 10-year gap between the films. Luckily, Fleischer certainly excels in bringing the same type of comedic nuances and cinematic aspects that made the first Zombieland enjoyable to Double Tap; creating a second installment that has plenty of fun and entertainment throughout. A lot of the familiar / likeable aspects of the first film, including the witty banter between four main lead characters, continues to be at the forefront of this sequel; touching upon each character in a amusing way, with plenty of nods and winks to the original 2008 film that’s done skillfully and not so much unnecessarily ham-fisted. Additionally, Fleischer keeps the film running at a brisk pace, with the feature having a runtime of 88 minutes in length (one hour and thirty-nine minutes), which means that the film never feels sluggish (even if it meanders through some secondary story beats / side plot threads), with Fleischer ensuring a companion sequel that leans with plenty of laughter and thrills that are presented snappy way (a sort of “thick and fast” notion). 
Speaking of which, the comedic aspect of the first Zombieland movie is well-represented in Double Tap, with Fleischer still utilizing its cast (more on that below) in a smart and hilarious by mixing comedic personalities / personas with something as serious / gravitas as fighting endless hordes of zombies every where they go. Basically, if you were a fan of the first Zombieland flick, you’ll definitely find Double Tap to your liking. In terms of production quality, Double Tap is a good feature. Granted, much like the last film, I knew that the overall setting and background layouts weren’t going to be something elaborate and / or expansive. Thus, my opinion of this subject of the movie’s technical presentation isn’t that critical. Taking that into account, Double Tap does (at least) does have that standard “post-apocalyptic” setting of an abandoned building, cityscapes, and roads throughout the feature; littered with unmanned vehicles and rubbish. It certainly has that “look and feel” of the post-zombie world, so Double Tap’s visual aesthetics gets a solid industry standard in my book. Thus, a lot of the other areas that I usually mentioned (i.e set decorations, costumes, cinematography, etc.) fit into that same category as meeting the standards for a 2018 movie. Thus, as a whole, the movie’s background nuances and presentation is good, but nothing grand as I didn’t expect to be “wowed” over it. So, it sort of breaks even. This also extends to the film’s score, which was done by David Sardy, which provides a good musical composition for the feature’s various scenes as well as a musical song selection thrown into the mix; interjecting the various zombie and humor bits equally well. There are some problems that are bit glaring that Double Tap, while effectively fun and entertaining, can’t overcome, which hinders the film from overtaking its predecessor. Perhaps one of the most notable criticism that the movie can’t get right is the narrative being told. Of course, the narrative in the first Zombieland wasn’t exactly the best, but still combined zombie-killing action with its combination of group dynamics between its lead characters. Double Tap, however, is fun, but messy at the same time; creating a frustrating narrative that sounds good on paper, but thinly written when executed. Thus, problem lies within the movie’s script, which was penned by Dave Callaham, Rhett Reese, and Paul Wernick, which is a bit thinly sketched in certain areas of the story, including a side-story involving Tallahassee wanting to head to Graceland, which involves some of the movie’s new supporting characters. It’s fun sequence of events that follows, but adds little to the main narrative and ultimately could’ve been cut completely. Thus, I kind of wanted see Double Tap have more a substance within its narrative. Heck, they even had a decade long gap to come up with a new yarn to spin for this sequel…and it looks like they came up a bit shorter than expected. Another point of criticism that I have about this is that there aren’t enough zombie action bits as there were in the first Zombieland movie. Much like the Walking Dead series as become, Double Tap seems more focused on its characters (and the dynamics that they share with each other) rather than the group facing the sparse groupings of mindless zombies. However, that was some of the fun of the first movie and Double Tap takes away that element. 
Yes, there are zombies in the movie and the gang is ready to take care of them (in gruesome fashion), but these mindless beings sort take a back seat for much of the film, with the script and Fleischer seemed more focused on showcasing witty banter between Columbus, Tallahassee, Wichita, and Little Rock. Of course, the ending climatic piece in the third act gives us the best zombie action scenes of the feature, but it feels a bit “too little, too late” in my opinion. To be honest, this big sequence is a little manufactured and not as fun and unique as the final battle scene in the first film. I know that sounds a bit contrive and weird, but, while the third act big fight seems more polished and staged well, it sort of feels more restricted and doesn’t flow cohesively with the rest of the film’s flow (in matter of speaking). What’s certainly elevates these points of criticism is the film’s cast, with the main quartet lead acting talents returning to reprise their roles in Double Tap, which is absolutely the “hands down” best part of this sequel. Naturally, I’m talking about the talents of Jessie Eisenberg, Woody Harrelson, Emma Stone and Abigail Breslin in their respective roles Zombieland character roles of Columbus, Tallahassee, Wichita, and Little Rock. Of the four, Harrelson, known for his roles in Cheers, True Detective, and War for the Planet of the Apes, shines as the brightest in the movie, with dialogue lines of Tallahassee proving to be the most hilarious comedy stuff on the sequel. Harrelson certainly knows how to lay it on “thick and fast” with the character and the s**t he says in the movie is definitely funny (regardless if the joke is slightly or dated). Behind him, Eisenberg, known for his roles in The Art of Self-Defense, The Social Network, and Batman v Superman: Dawn of Justice, is somewhere in the middle of pack, but still continues to act as the somewhat main protagonist of the feature, including being a narrator for us (the viewers) in this post-zombie apocalypse world. Of course, Eisenberg’s nervous voice and twitchy body movements certainly help the character of Columbus to be likeable and does have a few comedic timing / bits with each of co-stars. Stone, known for her roles in The Help, Superbad, and La La Land, and Breslin, known for her roles in Signs, Little Miss Sunshine, and Definitely, Maybe, round out the quartet; providing some more grown-up / mature character of the group, with Wichita and Little Rock trying to find their place in the world and how they must deal with some of the party members on a personal level. Collectively, these four are what certainly the first movie fun and hilarious and their overall camaraderie / screen-presence with each other hasn’t diminished in the decade long absence. To be it simply, these four are simply riot in the Zombieland and are again in Double Tap. With the movie keeping the focus on the main quartet of lead Zombieland characters, the one newcomer that certainly takes the spotlight is actress Zoey Deutch, who plays the character of Madison, a dim-witted blonde who joins the group and takes a liking to Columbus. Known for her roles in Before I Fall, The Politician, and Set It Up, Deutch is a somewhat “breath of fresh air” by acting as the tagalong team member to the quartet in a humorous way. 
Though there isn’t much insight or depth to the character of Madison, Deutch’s ditzy / air-head portrayal of her is quite hilarious and is fun when she’s making comments to Harrelson’s Tallahassee (again, he’s just a riot in the movie). The rest of the cast, including actor Avan Jogia (Now Apocalypse and Shaft) as Berkeley, a pacifist hippie that quickly befriends Little Rock on her journey, actress Rosario Dawson (Rent and Sin City) as Nevada, the owner of a Elvis-themed motel who Tallahassee quickly takes a shine to, and actors Luke Wilson (Legally Blonde and Old School) and Thomas Middleditch (Silicon Valley and Captain Underpants: The First Epic Movie) as Albuquerque and Flagstaff, two traveling zombie-killing partners that are mimic reflections of Tallahassee and Columbus, are in minor supporting roles in Double Tap. While all of these acting talents are good and definitely bring a certain humorous quality to their characters, the characters themselves could’ve been easily expanded upon, with many just being thinly written caricatures. Of course, the movie focuses heavily on the Zombieland quartet (and newcomer Madison), but I wished that these characters could’ve been fleshed out a bit. Lastly, be sure to still around for the film’s ending credits, with Double Tap offering up two Easter Eggs scenes (one mid-credits and one post-credit scenes). While I won’t spoil them, I do have mention that they are pretty hilarious. 🔥 FINAL THOUGHTS 🔥 It’s been awhile, but the Zombieland gang is back and are ready to hit the road once again in the movie Zombieland: Double Tap. Director Reuben Fleischer’s latest film sees the return the dysfunctional zombie-killing makeshift family of survivors for another round of bickering, banting, and trying to find their way in a post-apocalyptic world. While the movie’s narrative is a bit messy and could’ve been refined in the storyboarding process as well as having a bit more zombie action, the rest of the feature provides to be a fun endeavor, especially with Fleischer returning to direct the project, the snappy / witty banter amongst its characters, a breezy runtime, and the four lead returning acting talents. Personally, I liked this movie. I definitely found it to my liking as I laugh many times throughout the movie, with the main principal cast lending their screen presence in this post-apocalyptic zombie movie. Thus, my recommendation for this movie is favorable “recommended” as I’m sure it will please many fans of the first movie as well as to the uninitiated (the film is quite easy to follow for newcomers). While the movie doesn’t redefine what was previous done back in 2008, Zombieland: Double Tap still provides a riot of laughs with this make-shift quartet of zombie survivors; giving us give us (the viewers) fun and entertaining companion sequel to the original feature.
https://medium.com/@leoroxify/charm-city-kings-movie-2020-watch-or-download-full-hd-on-hbo-8a7999ebb844
[]
2020-10-13 01:58:50.633000+00:00
['Technology', 'Startup']
615
Echo Frames review: Alexa on your face is both helpful and annoying
Amazon: Sort out the messy notifications situation, please It’s been more than a year since Amazon introduced the Echo Frames to the world, and this month the connected glasses finally became widely available for purchase. For $250, you get hands-free access to Alexa wherever you go. Additionally, you can take calls, play music and hear your notifications through the device’s open-ear speakers, while still being aware of your surroundings. To be clear, though, that’s pretty much all the Echo Frames do; there’s no display or camera. Still, there’s potential here, especially for people who already wear glasses and want to interact with their phone without touching it. Design Compared to most smart glasses I’ve tested, the Echo Frames are surprisingly comfortable. I sometimes forgot at the end of the day that I still had them on. Only the medium/large size I received is currently available, but as someone with a fairly wide face I liked the fit. Amazon provides helpful measurements and a size guide on its site, which is redundant while there’s only one size available, but it’ll help you choose well. Pros Cons Notifications are a mess Poor audio quality and leaking Only one frame shape for now Summary The Echo Frames offer a convenient way to speak to Alexa all the time and take calls or listen to podcasts while remaining aware of your surroundings. The device is comfortable enough for all-day wear and can be fitted with prescriptive lenses so glasses-wearers will get the most use out of this. Amazon needs to iron out some quirks with notification management for now, and you won’t want to listen to music with these, but if you can overlook those issues you’ll find the Echo Frames surprisingly useful. Despite having thicker arms that house components like speakers and mics, the Frames didn’t feel clunky. In comparison, devices like the Bose Frames or Snap Spectacles are simply not as well-made or comfortable. If you don’t like Amazon’s squarish design here or were hoping for more color options, you’re out of luck. My review unit is in plain black and there are just two other variants available: tortoise and and “horizon blue,” whose color-blocked design has a blue bottom half and black top. Those looking for rounder frames or something more Warby Parker-like will have to wait till the company releases more styles. (No word yet on if or when that might be coming.) Unlike the Bose Frames, which are sunglasses, Amazon’s version comes with clear lenses and you can fit them with prescription ones too. This makes it far more useful, since you can wear them around the house or at night. Gallery: Amazon Echo Frames review pictures | 10 Photos Under the Echo Frames’ right arm sit buttons for volume control, a charging port, as well as what Amazon calls an action key. You can press this to access Alexa, and double click to mute the mics. Holding this down turns the Frames on or off. There’s also a touch-sensitive panel on the right side where you can swipe to hear details of a notification or tap to dismiss it, which is completely counterintuitive. And, to make matters worse, it sometimes confused my swipe for a tap and dismissed a message I wanted to hear. There’s a tiny LED in the frame above the right lens that’s only visible to the person wearing the glasses. When you say “Alexa” or press the action button, this lights up to signal that Alexa is listening, though it’s a little hard to see. 
In use I have an Echo speaker at home, and one of my biggest concerns was that saying “Alexa” would trigger both devices. Barring some early hiccups immediately after setup, Alexa usually responded on the Frames when I was wearing them. I did enable Auto Off, though, which turns off the glasses three seconds after I remove and place them upside down on a surface. Alexa is as responsive here as it is on Echo speakers — that is, very responsive. Amazon’s devices are consistently faster at waking up and doing what I ask than my Google speakers. Perhaps the best part, though, was that I no longer had to yell at my speaker from my kitchen and could speak at a normal volume to turn on the lights. And if you have ambient noise in the background, like music or running water, the Frames can still hear you since it’s right on your face. I also really like the fact that the Echo Frames allow you to listen to whatever you want on your phone without bothering others around you or putting on headphones. The open-ear design does make it possible to hear what’s in your surroundings while Instagram Stories, for example, play through the speakers. Like the Bose Frames, Amazon’s wearable uses a projection technique that beams sound into your ears so that, theoretically, nobody nearby can hear what you’re listening to. Brian Oh / Engadget In reality, though, this is only really effective at lower volumes and with primarily vocal content, like a phone call or podcast. Even at just 30 percent volume, my coworker Brian Oh could still hear G-Idle’s Oh My God from six feet away. Also, I just wouldn’t listen to music on Echo Frames. Anything I played, including Taylor Swift’s Willow, sounded hollow and had zero bass. Plus, even at max volume, I struggled to hear my music on the Frames over the songs an Uber driver played on their car’s speaker. Amazon offers a feature called Auto Volume that adjusts the Frames’ volume so you can hear your music over your surroundings. The problem is, this never seemed to work; every now and then I’d see the volume slider appear on my phone’s screen, indicating that the Frames were adjusting the levels. But the effect was never obvious and in noisy environments I struggled to hear sound from the speaker. Aside from acting as headphones for your music and calls, the Frames also help you stay on top of your alerts by reading them to you throughout the day. But this was also problematic. During my first few days with it, every single notification that popped up on my phone would result in a ping and description. That became even more irritating when, because of an active WhatsApp group chat, I was bombarded by nonstop alerts that quickly got annoying and distracting. Alexa also drove me nuts by announcing I had a message from “Device Personalization services” every time a new song started on Spotify. This is partly due to the way Android handles notifications and early Bluetooth-based wearables suffered from similar issues. But even after I tapped to dismiss the first two such alerts, which Alexa says will keep you from hearing it again, I kept getting pinged every time a song changed. Brian Oh / Engadget You can turn on the VIP Filter setting in the Alexa app to remedy that by choosing what apps to hear from on the Frames. But the selection of apps is pretty limited at the beginning; only apps that had already sent a notification through would show up, meaning you could only choose to disable something after it pings you once. 
I only allowed my frequent apps like WhatsApp, Telegram, Discord and Instagram through the Filter, but an annoying alert from Peacock still got through after I enabled VIP Filter. You have to choose the “Pause new requests” option here to stop this, which is an extra step to take. VIP Filter is also a bit of a misnomer. Since Amazon doesn’t currently recognize Telegram and Discord, among others, as messaging apps, you won’t be able to prioritize specific contacts’ chats. So far, only WhatsApp is identified, and I can whitelist people and groups within it. Amazon clearly still has a lot of work to do to improve how the Frames handle notifications, but the company acknowledges this is a new product and says it’s “always interested in hearing customer feedback.” The company says the Frames will last for two hours of mixed use on 80 percent volume, and up to four hours of continuous playback. I generally left the volume at 50 percent or lower, and I got a day or more on a charge. I used the Frames to chat with my friend for an hour before playing music for about 20 minutes at about 80 percent volume and the battery dropped from 100 percent to 60 percent. The competition Amazon is far from the first company to make smart glasses. It’s not even the first to try its hand at Bluetooth frames with embedded speakers. Bose even had the same name for its: Frames. There are a few key differences, for one, since Bose’s $200 Frames are a pair of sunglasses, you’re not gonna wear them at night. Bose’s device also doesn’t have a touch panel, and its single button controls are fairly simplistic. I also found them clunky, tight and quite uncomfortable after about an hour of wear. The company has since released newer “Sport” models for $50 more, but I haven’t tried them and can’t tell you if they’re more comfortable. Bose does offer better audio projection than Amazon, though, and I only noticed sound leakage from the Bose Frames at higher than 80 percent volume. Wrap-up Despite some complaints, I’m surprised by how much I like the Echo Frames. I found myself putting them on several times a day even though I don’t need glasses. They’re great for when I want to hop on a call while still being able to listen out for a delivery person, or when I need Alexa to hear me over the music my Google speakers are blaring. If you actually need glasses, they might be even more useful. Amazon still needs to iron out some kinks, particularly with notification handling, but as long as you don’t need high-quality audio and can live with annoying alerts, the https://www.fiverr.com/share/0K9WRa
https://medium.com/@pacegrid/echo-frames-review-alexa-on-your-face-is-both-helpful-and-annoying-bad7ea1a57c4
[]
2020-12-24 10:44:01.284000+00:00
['Technology', 'Marketing Strategies', 'Health', 'Marketing', 'SEO']
616
How Spam and Online Scams Take Place in the Online World
Spam messages What is Spam? Technology has developed at a rapid pace, and along with the good comes the bad: there are always people trying to misuse the technology to spam inboxes. Spam refers to the sending of bulk messages through email, instant messaging or any other digital communication tool. Types Of Spam 1. Email Spam 2. Messaging Spam 3. Forum Spam How To Avoid Spam? There isn’t much you can do about spam messages. The best option is to ignore them; if you reply, the sender knows you are an active user and you may receive more. There are also technologies like spam filters, available in both free and paid versions, that will block spam-related mails or messages from your inbox. Identity Theft? Online identity theft is the theft of personal information in order to commit fraud. This type of theft usually takes place by hacking into a system to gain personal information such as card details, account details or social media passwords. A related concern is identity spoofing, in which the victim is impersonated on social networking sites such as Facebook or Twitter. How Can One Avoid Scams? The best defence against these online scams and frauds is generally to rely on caution when using the Internet. For example:
https://medium.com/@epsecuritymag/how-spam-online-scams-takes-places-in-online-world-70f64b3c62a4
['Enterprise Security Mag']
2020-12-18 05:35:10.688000+00:00
['Scam', 'Technews', 'Spam', 'Technology', 'Online']
617
How to Continuously Deliver Kubernetes Applications With Flux CD
Authorise Flux CD to Connect to Your Git Repository We now need to allow the Flux CD operator to interact with the Git repository, and therefore, we need to add its public SSH key to the repo. Get the public SSH key using fluxctl. $ fluxctl identity --k8s-fwd-ns flux ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCryxSADyA+GIxtyCwpO3R9EuRcjZCqScKbYO246LZknyeluxKz0SlHYZHrlqxvla+k5GpPqnbImLLhuAD+YLzn0DbI58hUZLsrvxPWKiku--REDACTED--MKoPyEtQ+JiR3ZiADx6Iq8tYRRR+WBs1k5Hc8KNpg+FSRP8I8+CJRkCG4JQacPwK8FESP4qr1dxVv1tE8ZXyb8CdiToKpK7Mkc= root@flux-b9b4cc4f9-p9w88 Add the SSH key to your repository so that Flux CD can access it. Go to https://github.com/<YOUR_GITHUB_USER>/nginx-kubernetes/settings/keys Add a name to the Key in the Title section. Paste the SSH Key in the Key section. Check “Allow write access.” Flux CD synchronises automatically with the configured Git repository every five minutes. However, if you want to synchronise Flux with the Git repo immediately, you can use fluxctl sync, as below. $ fluxctl sync --k8s-fwd-ns flux Synchronizing with ssh://git@github.com/bharatmicrosystems/nginx-kubernetes Revision of master to apply is 8db9163 Waiting for 8db9163 to be applied ... Done. Now let’s get the pods to see if we have two replicas of nginx. $ kubectl get pod -n web NAME READY STATUS RESTARTS AGE nginx-deployment-7fd6966748-lj8zd 1/1 Running 0 20s nginx-deployment-7fd6966748-rbxqs 1/1 Running 0 20s Get the service, and you should see an nginx load balancer service running on port 80. If your Kubernetes cluster can spin up load balancers, you should see an external IP appear. $ kubectl get svc -n web NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE nginx-service LoadBalancer 10.8.10.33 35.222.174.212 80:30609/TCP 94s Test the service using the external IP. If your cluster does not allow you to spin up load balancers, you can use the NodeIP:NodePort combination. $ curl http://35.222.174.212/ This is version 1 Update the image to bharamicrosystems/nginx:v2 in workloads/nginx-deployment.yaml $ sed -i "s/nginx:v1/nginx:v2/g" workloads/nginx-deployment.yaml $ git add --all $ git commit -m 'Updated version to v2' $ git push origin master Let’s now wait five minutes for the auto-sync, and meanwhile, watch the pods updating. $ watch -n 30 'kubectl get pod -n web' NAME READY STATUS RESTARTS AGE nginx-deployment-5db4d6cb84-8lbsk 1/1 Running 0 11s nginx-deployment-5db4d6cb84-qc6jp 1/1 Running 0 10s nginx-deployment-6784c95fc7-zqptk 0/1 Terminating 0 6m43s And as you see, the old pods are terminating, and new ones are spinning up. Check the pod status to ensure all pods are running. $ kubectl get pod -n web NAME READY STATUS RESTARTS AGE nginx-deployment-5db4d6cb84-8lbsk 1/1 Running 0 1m nginx-deployment-5db4d6cb84-qc6jp 1/1 Running 0 1m Now let’s call the service again. $ curl http://35.222.174.212/ This is version 2 And as you see, the version is now updated to v2. Congratulations! You have successfully set up Flux CD on your Kubernetes cluster.
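One closing sketch, since the walkthrough never shows the manifest files themselves: workloads/nginx-deployment.yaml could look roughly like the following. The nginx-deployment and nginx-service names, the web namespace, the two replicas, the bharamicrosystems/nginx:v1 image and the port-80 LoadBalancer all come from the kubectl output above; the app: nginx labels and the exact field layout are assumptions, because the repository contents aren't reproduced in the article.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx                 # assumed label; any consistent label works
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: bharamicrosystems/nginx:v1   # the tag the sed command later bumps to v2
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: web
spec:
  type: LoadBalancer
  selector:
    app: nginx                   # must match the Deployment's pod labels
  ports:
    - port: 80
      targetPort: 80

Flux CD applies whatever manifests it finds under the configured path in the repository, so committing a change to a file like this (which is what the sed and git commands above do) is all it takes to roll out a new version.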
https://medium.com/better-programming/how-to-continuously-deliver-kubernetes-applications-with-flux-cd-502e4fb8ccfe
['Gaurav Agarwal']
2020-05-27 16:08:00.390000+00:00
['Programming', 'Software Engineering', 'Kubernetes', 'DevOps', 'Technology']
618
13 Conda Commands for Data Scientists
Let’s get to it! 🚀 The Need Whether working locally or on a server in the cloud, you want a virtual environment to isolate your Python version and packages so that you can: use different package versions for different projects experiment with bleeding edge package versions get the same Python and package versions as your teammates so your shared code works similarly Do I have to use Conda? Nope. It’s very common in data analysis and scientific computing, but not so common outside. There are lots of tools you can use to manage virtual Python environments. conda, Venv, pyenv, pipenv, poetry, Docker, and virtualenv are the most prominent. Trust me, you don’t even want to go down this rabbit hole. 🐇 But if you must, here’s a Hacker News thread for a taste of the fun. And Yufeng G, Developer Advocate for Google Cloud, has a nice video on conda, virtualenv, and pyenv here. Today we’re exploring conda, so let’s dive in! Anaconda Anaconda is both the name of the company behind the software and the name of the fully featured software distribution. The Anaconda company offers consulting and three Anaconda editions: individual, team, and enterprise. Individual is common and powerful. It’s not like there are freeware gotchas. 😉 It’s what I use. Distributions: Anaconda vs Miniconda You can choose to download and install one of two distributions: Anaconda or Miniconda. Similarities Both Anaconda and Miniconda get you conda, the package manager and virtual environment manager in one. Both can be installed on Windows, Mac, or Linux systems. Both come with Python and pip installed. Differences The Anaconda distribution installs many Python packages and tools that are common for data science. Miniconda does not. It’s the difference between a big, batteries-included download and a minimal download without many packages installed for you. Miniconda install ranges between 50 and 90 MiB in mid-2020, depending on your operating system . The Anaconda distribution requires a “minimum 5 GB disc space to download and install” — according to the docs in mid-2020. The Anaconda distribution installer also includes options to install a GUI and other popular software such as the VSCode text editor. If you are getting started with scientific computing using Python and have the space on your computer, I suggest you download the Anaconda distribution. It’s a lot of stuff, but it’s good stuff. 👍 If you’ve worked with conda previously, I suggest you download Miniconda. You can add just the packages you need as you go. 😀 See install instructions for Anaconda here and for Miniconda here. Source: pixabay.com Conda commands If you installed the Anaconda distribution you can use the GUI installer, but I suggest using the command line interface to help you work faster. You are a programmer after all (or becoming one)! 😉 Let’s look at common conda commands to create and manage conda environments. Everything is the same whether you installed Anaconda or Miniconda. It’s all conda at this point. 😀 Create conda environments conda create -n myenv pandas jupyterlab Create a new conda environment named myenv with the latest version of Python available on the main conda channel. Install pandas and jupyterlab packages into the environment. -n is short for --name . Conda asks for confirmation with package changes. Press y when asked whether you want to proceed. conda create -n myclone --clone myenv Duplicate the environment named myenv into the new environment named myclone. All the packages come along for the ride! 
🚙 conda create -n myenv python=3.8 Create a new conda environment named myenv with the Python version specified. Pip and about 15 other packages are installed. (If you prefer a declarative setup, there is an environment.yml sketch at the end of this piece.) Only snake picture in this guide, I promise. Source: pixabay.com Manage conda environments conda activate myenv — Activate the myenv virtual environment. You can tell you are in an active virtual environment because your command line prompt will start with the name of your virtual environment in parentheses. Like this: (base) Jeffs-MBP:~ jeffhale$ The default conda environment is named base. Conda can be set up to activate an environment whenever you open your terminal. When installing the software the installer will ask you if you want to initialize it “by running conda init". I suggest you say yes so that your shell scripts get modified. Then you’ll start your terminal sessions in a conda environment. Wherever you navigate in your CLI, you will be within your activated conda virtual environment. 🎉 conda deactivate — Deactivate the current environment. Now you won’t be in a virtual environment. The conda environment you were in still exists, you just aren’t in it. 👍 conda env list — List conda environments. conda env remove --name myenv — Remove the myenv conda environment. Delete it forever. Source: pixabay.com Manage packages in active environment Use these commands from within an active conda environment. These are the conda commands I use most frequently. 😀 conda list — List installed packages in the active environment. The output includes the package version and the conda channel the package was installed from. 👍 Conda packages conda install pandas — Download and install the pandas package from the main conda channel. conda install -c conda-forge pandas — Download and install the pandas package from the conda-forge channel. The conda-forge channel is “A community-led collection of recipes, build infrastructure and distributions for the conda package manager.” Use the conda-forge channel when a package doesn't exist on the main channel or the version on the main channel isn't as new as you would like. Packages often get updated on conda-forge before the main conda channel. ☝️ You can also specify some other channel, but conda-forge is where you will find many packages. conda update pandas — Download and install the latest pandas package from the main conda channel. conda update --all — Download and install the latest versions of all installed packages. Can be very slow if you have a lot of outdated packages. ⚠️ conda uninstall pandas — Uninstall the pandas package in your conda environment. Pip Pip is the most common package installer. It ships with Python. When in an active conda environment, pip will install packages into that active conda environment. ⚠️ pip install -U pandas — Install or update the pandas package from PyPI, the Python package index. -U tells pip to upgrade the package (and any dependencies that need it). pip uninstall pandas — Uninstall the pandas package that was installed by pip. PyPI is the most common place to find Python packages. PyPI often has packages that are not on conda or conda-forge. PyPI often gets the latest version of packages first. Occasionally conda has packages that aren’t on PyPI, but that is less common. Conda and PyPI packages generally play together nicely. 🎉 Make sure that if you are installing a version of a package from PyPI, you don't already have the same package installed from conda. If you do have it installed, uninstall the conda version first.
Otherwise conda won’t use the new PyPI version. By default, the official conda channel is the highest priority channel and conda will use package versions from higher priority channels first.☝️ The official conda recommendation is to try to install packages from the main conda channel first, then conda-forge as necessary, and then pip. This helps avoid conflicts. In general, installing from any of those sources should work fine. If things ever go sideways Sometimes things get messed up. Generally uninstalling packages or creating a new conda environment is all you need to fix the situation. If that doesn’t work, conda update conda updates the conda software itself. If that doesn’t work, you can uninstall and reinstall conda. 👍 There are lots of conda users out there and lots of documentation, so as always, the search engine is your friend. 😉 Source: pixabay.com Wrap I hope you’ve enjoyed this overview of conda. If you did, please share it on your favorite social media so other folks can find it, too. 😀 I write about Python, SQL, Docker, and other tech topics. If any of that’s of interest to you, sign up for my mailing list of awesome data science resources and checkout out some of my 60+ articles you grow your skills here. 👍
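As promised above, here is a minimal environment.yml sketch. Conda can also build an environment from a declarative file like this; the package list is just illustrative, mirroring the myenv example from this guide:

name: myenv
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.8
  - pandas
  - jupyterlab
  - pip
  - pip:
      - requests        # illustrative: PyPI-only packages can be listed under pip

conda env create -f environment.yml builds the environment in one shot, and conda env export > environment.yml goes the other way, which makes it easy to hand teammates the exact same setup.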
https://towardsdatascience.com/13-conda-commands-for-data-scientists-e443d275eb89
['Jeff Hale']
2020-09-26 12:53:28.827000+00:00
['Machine Learning', 'Data Science', 'Python', 'Artificial Intelligence', 'Technology']
619
Blog Series Part 3: Rules and Regulations of Medical Device Product Development
When I started working for companies that built medical devices, I was completely unaware of the legal and regulatory aspects of product development. I thought the values and principles of healthcare guide this process — empathy, ethics, and professionalism. But just like in any other industry, commercial interests can be quite dominant, and often take precedence over all other domains, leaving room for significant errors and liabilities in the products placed onto the market. In some industries, the consequences may not be so dramatic, but in healthcare, we can expose patients to significant harm when they interact with the products and services we build or provide. To prevent this from happening, this space is guided by two broad sets of regulations that every medical device manufacturer has to follow: Regulations on how to build a medical device — Whether it is hardware and/or software, every manufacturer must adhere to a wide range of regulations that cover the manufacturing process and its documentation, analysis of risk, etc. For all countries in the European Union, the famous “medical device regulation” (known as the MDR), which will be implemented in May 2021 and will replace the old “medical device directive” (MDD), is one of the most important documents to read for all startups and companies that are aiming to sell their products in the EU. The FDA has its own set of regulations that govern medical device manufacturing and market approval in the US. China, Brazil, the UAE, Australia, Canada, and perhaps a few other countries have their own rules and regulations that are more or less similar to those imposed in the EU or the US. Regulations on data privacy and security — By now, you have probably heard the word “GDPR” (standing for General Data Protection Regulation) at some point, denoting a set of regulations developed by the EU to ensure the safety and security of data acquired from individuals. Health-related data are one of the most sensitive forms of personal information, and the list of companies who can profit from them is not short. For this reason, rigorous standards about ownership, archiving, storage, processing, and controlling the data must be met when building and selling medical devices. Both of these guidelines must be thoroughly reviewed before you actually start building your product, for several reasons: The intended use and level of associated risk (among other things) dictate the classification of your product. I will write a detailed article on what “intended use” is because it is the single most important thing your company and product have to define early in the process. More often than not, this is done late in the process, and its change (or absence) may cause irreversible damage to your strategy and timelines, sometimes ending in complete catastrophe. The countries in which you want to sell your device will dictate your entire regulatory strategy and the manufacturing process. The documentation you need to create throughout the process of development may significantly differ if you are trying to sell your product in the US, in Brazil, or in China.
Ensuring compliance with any of the required processes and their documentation is impossible to do retrospectively, so there is little room for corrections in those last few weeks or months. Compliance with regulations will ensure the quality of your product (and company). Regulations are often considered barriers to innovation, something to tick off so you can just focus on the “important stuff” — these assumptions are simply wrong. The regulations are there to help companies ensure patient safety while solving their particular problems. For both sets of guidelines to be appropriately followed and implemented, each medical device company must have: A regulatory consultant — in charge of steering the product development process and ensuring compliance with regulations. A data protection officer (DPO) — in charge of ensuring proper setup of the data infrastructure and the legal documentation surrounding data security. How does the world of regulations affect each of the areas I mentioned in my first article of the series? Clinical — Depending on the classification of the device and its intended use, clinicians have the task of generating evidence and proving that the device you are building is, among many things, safe to use and “does what it is supposed to do — fulfill its intended use” (again, the intended use comes up). This may range from simple in-house testing to a prospective clinical validation in a hospital setting with full ethics committee and medical authority approval, such as the MHRA in the UK or BfArM in Germany. Commercial & Business — Timelines, delivery of the product, and the funding required all depend on the regulatory requirements your product needs to comply with. Getting a medical device to market, particularly one that is higher up the classification ladder, is not something that can be done quickly. This has to be laid out clearly and transparently to upper management to ensure realistic expectations. Company Relationships — It is safe to say that the work on regulatory compliance can be stressful at times — creating and signing off specifications, documenting all of the work, and adapting to client/market feedback are just some of the activities that various people from different teams have to participate in. Creating and maintaining a healthy relationship, both within a product team and horizontally with the manager/commercial teams, is just as important as all the work that goes into the medical device development. Technology — Whether it’s the materials used for creating a hardware device or the cloud infrastructure for software as a medical device (known as SaMD), they all have to abide by rules and regulations. The latter particularly focuses on the flow of data and access to it — this is where your DPO plays a massive role. Don’t even think about doing this yourself; there are significant legal liabilities involved in data misuse or breach. Penalties include up to 4% of all company revenue or up to 20 million EUR!
Plus, all of the work needs to be meticulously documented, and it is better to get acquainted with that early on — it might be subjected to an audit! Innovation — As we mentioned previously, regulations may seem as limitations to innovation and use of new technologies, particularly to commercial/executive teams. Quite the opposite. In my opinion, if read and interpreted properly, regulations can actually help you focus on the problems you are trying to solve and guide you to the best use of your technology. Ever since I started listening to Dr. James Somauroo’s “HS HealthTech” podcast, the recurring mantra of being “problem-led vs. solution-driven” helped me solidify the fact that the problem you are trying to solve will dictate how will you solve it, not the other way around. Customer Feedback — According to regulations, you have to set up a plan on how will you obtain feedback and complaints from your users, but most importantly, how will you identify any potential misuse or deficient product behaviour. As one regulatory consultant said to me years ago: “It is fairly easy once the product is in the market. You just listen to your customers and users, and iterate based on their feedback.” This is so true, but setting up that process takes proactive effort. Why is this important for clinicians to know? Because of our background, and the fact we inherently put patient safety first, the regulations surrounding medical devices will not come as a burden to clinicians and medical professionals in this space. It will come as a relief, as a confirmation that there are also other people who share the same values as us, which is why I think we often have the best relationships with our colleagues from the regulatory space. In fact, we will often be the (only) voice of support of reg consultants, and our job as clinicians is to be the bridge between the commercial goals and our regulatory activities. For DPOs, we must help them in ensuring our policies around data acquisition and use are in accordance with data privacy and security regulations — this too requires effort to be set up. In most articles and podcasts, you will read or hear about companies who embrace regulations, and the ones who don’t — as clinicians, we must do everything we can to put our teammates and managers in a position to be the former.
https://medium.com/@alexdespotovicmd/blog-series-part-3-rules-and-regulations-of-medical-device-product-development-667cfc2f1215
['Alex Despotovic']
2020-12-09 18:07:46.816000+00:00
['Medical Devices', 'Healthcare Technology', 'Product Development', 'Digital Health', 'Healthtech']
620
Anthony Padilla is offering NordVPN discount — become safe online for a good price
Anthony Padilla gained his popularity after creating YouTube channel with Ian Hecox named Smosh. After multiple food battles, attempts to figure out how games or movies would actually look in real life and adopting Charlie the Drunk Guinea Pig found inside a dumpster, Anthony decided to leave Smosh and launch his personal YouTube career, You can witness Anthony spending his days with nudists, Area 51 raiders, celebrity impersonators and ex Uber drivers. Not only Anthony delivers great content but he is also offering a NordVPN discount — now you can become secure online for a great price. How to get a NordVPN discount from Anthony? Just click on the link below: Anthony Padilla NordVPN 3-year subscription 70% discount After clicking on the link and entering your payment details, at the checkout you will notice that the discount code has been applied to your purchase, all that is required from you is to choose a payment method: What does NordVPN offer? With a 70% discount on a 3-year subscription from Anthony Padilla you will become secure online instantly. Military Grade encryption ensures your data is safe whenever you browse online when connected to a public Wi-Fi network, and accessibility to more than 5000 servers in 62 countries will let you bypass almost any geographical restrictions — You will be able to stream US Netflix content when connecting from any country outside of the US and scroll Twitter feed when travelling in China. Never heard of Anthony Padilla? Daniel Anthony Padilla, is a 31-year old American YouTube video maker, producer, stand up-comedian, and video blogger. He is best known as the co-founder and co-proprietor of the renowned YouTube channel Smosh, alongside his friend Ian Andrew Hecox. Anthony also released multiple comedy music albums through Smosh, including Sexy Album in 2010 and If Music Were Real in 2011. By his own account, Padilla has said that he was very introverted as a child, due in part to anxiety issues that would affect him when he received attention from others. He has also spoken about having aspirations to get into game design when he was a child. Anthony isn’t the only YouTube star that chose NordVPN to secure himself online: Double Toasted and OSP use NordVPN too. Why NordVPN? Here’s a few more reasons why Anthony Padilla chose NordVPN: NordVPN is offering 5000+ servers located in 62 countries allowing you to browse almost from any location in the world — with that kind of variety in servers you can bypass almost any geographical restrictions out there. An Inbuilt Ad blocker will protect you from malicious ads you may encounter when browsing. You can simultaneously connect to different VPN servers from 6 devices no matter what OS they’re running whether it’s iOS, Android, Windows, Linux or macOS. NordVPN is based in Panama, a country where no mandatory data retention laws exist — once you connect to a VPN server you can stay assured that your personal data isn’t collected and sold afterwards.
https://medium.com/@nathanelnicols/anthony-padilla-nordvpn-discount-offer-d4b4291a3989
['Nathanel Nicols']
2019-09-30 12:59:59.251000+00:00
['Cybersecurity', 'YouTube', 'Privacy', 'Technology', 'VPN']
621
Security Researcher Adds Spy Chip to IT Equipment for Just $200
by Ryan Whitwam Credit: Getty Images Malware scanners can protect your devices from malicious software, but what about malicious hardware? Implanting covert spy chips could give a bad actor unlimited access to your data, which is why Bloomberg Business’ SuperMicro report last year was so worrying. No evidence has surfaced to support those claims, but sneaking spy chips into hardware isn’t impossible, In fact, one security researcher says he’s figured out a way to do it for about $200 in his basement. In the Bloomberg story, sources claimed that Chinese state-sponsored hackers had secretly added small chips to SuperMicro’s server motherboards. These boards were later used in Apple and Amazon servers. The chips were allegedly tiny, no larger than a grain of rice. So, it was understandable they had snuck in under the radar. However, every company named in the story has denied it, and external reviews of SuperMicro boards found no such chips. We know intelligence agencies like the NSA routinely insert spy chips into devices during transit, and security researcher Monta Elkins claims to have developed a version of that technique with off-the-shelf hardware. All Elkins needed was a $150 air-soldering tool, a $40 microscope, and some tiny programmable chips used in personal electronics projects. Elkins approach uses an ATtiny85 chip salvaged from Digispark Arduino boards, each of which costs around $2. The chips have a total surface area of about 5mm, more than small enough to go unnoticed on a circuit board. You can see the chip indicated below, but Elkins says he could have made it even more stealthy if he hadn’t wanted to show the chip placement to fellow hackers. As a proof of concept, Elkins created code for the chip that allowed him to interface with the administrator settings on a Cisco ASA 5505 firewall. Elkins says he chose that model because it was the cheapest one he could find on eBay, but the attack should work on all similar systems. When the compromised board boots up, the chip triggers the firewall’s password recovery feature and creates a new administrator account. An attacker could use that account to monitor network activity and steal data. Elkins plans to reveal all the details of his project at the upcoming CS3sthlm security conference, but he’s not trying to prove Bloomberg’s report is accurate. Instead, he wants everyone to realize implanting spy hardware is trivially easy regardless of whether that report was true. It only cost him $200 to devise a strategy to do it, and a state-sponsored hacker with access to chip fabrication could make much more stealthy custom designs for a few thousand dollars. Now read:
https://medium.com/extremetech-access/security-researcher-adds-spy-chip-to-it-equipment-for-just-200-d7c58159b4a3
[]
2019-10-11 13:41:27.743000+00:00
['Privacy', 'Information Technology', 'Security', 'Networking', 'Surveillance']
622
Does your business need a CRM? 10 Warning Signs
Although Customer Relationship Management (CRM) software has become a burgeoning phenomena, there is no wisdom in simply jumping on the bandwagon and investing in CRM technology. Firstly, you should know what CRM software is and how it will impact your business operations. Secondly, you should look for the following ten warning signs which might indicate that it’s about time you get a CRM software. 1. You are still using excel sheets to run your business Using manual processes like spreadsheets result in business operations being highly inefficient. It wastes precious time of your employees doing repetitive tasks which leads to a decline in productivity. Also, employees these days find working on Excel sheets extremely boring. Your business can take a lot of mileage by using a CRM as it eliminates the need to do repetitive tasks manually through automation. Some of the most common applications of workflow automation in a CRM are marketing automation, sales automation and customer support automation. All these will make your business hours more productive and fruitful by finishing routine tasks for you without any hassle. 2. You do not have a single point of consolidated and centralized customer data The more information you have about your customers, the better strategic decisions you can make. However, a plethora of information is of little value if it is being stored in Google sheets, business cards, handwritten notes etc. A CRM can give you a 360 degree view of your customers by allowing customer-facing employees across different departments maintain every interaction they have in one central database. Information could be accessed and updated in real-time. 3. Your salespersons have insufficient knowledge about your customers As your customer base grows, it becomes increasingly difficult for salespersons to remember each and every detail related to a customer since there’s no database to give you instant consolidated information about the customers. If customers get the slightest hint that they are being neglected, they will simply walk away. That will be downright damaging for your business. With a CRM in place, salespersons will have all the relevant data about the customers at their fingertips e.g past calls, meeting, emails etc. They will know who the customer is and what product he is interested in based on the past interactions. Hence, a CRM will enable the salespersons to serve the customers in a more personalized manner. 4. You are treating all customers in a similar fashion The need to know your customer base for targeted marketing has become more important than ever. If you are targeting every customer with the same promotional content, you are making a big mistake. Every prospect needs individual attention depending at which stage they are in the sales life-cycle. A CRM system gives you the ability to segment leads based on their interests, needs, preferences, demographics, industry vertical and more. It helps salespersons in planning effective strategies to move the leads from one stage of the sales funnel to another quickly. 5. Your marketing and sales departments lack coordination The marketing department is responsible for nurturing leads whereas the sales department is accountable for closing deals. Both need to work in harmony to ensure that leads are converted into opportunities and finally deals. With CRM, marketing and sales departments can now stay updated by having access to real-time data related to a customer’s profile. 
Marketing team can pass on the leads to the sales team without any manual effort. The sales team can then act on those leads and try to convert them into deals.
https://medium.com/business-startup-development-and-more/does-your-business-need-a-crm-10-warning-signs-d818d07a8299
['Phillips Campbell']
2017-06-20 12:29:45.920000+00:00
['Marketing', 'Business', 'Customer Service', 'CRM', 'Technology']
623
Mastering Online & Virtual Meetings
Mastering Online & Virtual Meetings What Gear to Buy, How to Set It Up, & How To Look Your Best It feels like this a lot, doesn’t it? This piece will provide you with some of the best tools and practices to help anchor your virtual meetings whether you’re a teacher, a manager, a technologist, or just a good worker bee. The goal is to set you up for success and to make you feel like you’ve mastered all of the basics and intermediate skills when it comes to being in front of a computer with a camera. So… let’s dig in! The technology you own and use matters. So does the way you set it up. So let’s talk about what products you should consider owning AND how best to use it. Without further delay… let’s dive into the hardware. THE HARDWARE TO USE I’ll only focus on gear that’s affordable, portable, easy to set up, and fits on your desktop. If you were to buy all of the products I recommend here, you’d invest a total of about $200. That’s it. If you’re on a tight budget, I encourage you to view this gear as a reasonable investment to further your career and help you stand out from the crowd when it comes to virtual teaching, coaching, or facilitation. If cost is an issue, you can often find these products used or refurbished and in great condition. Also, consider using any professional development funds offered by your school to make the purchase. Your Camera First Choice Camera: Logitech HD Pro Webcam C920 — about $100 While the webcam on your Mac or PC might suffice for basic chats with friends and family, you’ll want a camera that works in full high-definition video (also known as 1080p) if you’re teaching, facilitating, managing, or counseling online. Go with what tens of thousands of people have reviewd as one of the best webcams on the planet: the C920 series from Logitech. It’s one of the most affordable options to make your image look crisp and professional. It sells currently for between $75-$125. It clips to the top of your laptop or desktop monitor. Most importantly, it’s one of the most beloved and well-reviewed webcams on the market. I use this model personally and LOVE IT. 2nd Choice Camera: the Amcrest 1080p webcam — about $40. For less than HALF the cost of the Logitech, you can still get a great high-definition camera that’s amazing. I got my hands on one of these last week and am very impressed. Even the microphone it has on board is pretty decent. Your Camera Setup Whatever camera you decide to use, you’ll need to ensure that you’ve set it up correctly. Your camera should always be at eye level and not above or below you. That prevents others from virtually looking down at you (awkward) or virtually looking up at you (also, awkward). That might mean that you need to put your laptop or desktop monitor on a stack of books so that it’s right at your eye-level. If that’s what it takes: do it. That immediate face-level interaction will help put others on your video calls at ease on a subconscious level. Your Lighting 1st Choice Lighting: Mactrem 6 inch Selfie Ring Light — about $18 Making sure that you are well-lit is essential, even with the most expensive camera system. This low-cost but super effective lighting setup comes with an adjustable tripod, which most lighting rigs do NOT include. That means, you can place it on the desk next to you, adjust it to the right height, and then illuminate your face from the correct angle. Even better, the lights come pre-programmed with three lighting modes: white, warm yellow, and warm white. 
Pick a mode, then pick a brightness setting, and away you go! There are other fun gadgets that come with this setup, but you may or may not need a cellphone holder or remote control. Your Lighting Setup Light from in front, never from behind. That means you should always avoid having a window or bright light directly behind you. Instead, ensure that the brightest light sources in the room are in front of you. This might mean you need to reorient your office. Do it! Why not have a makeover?! Balance your light. The lighting rig I’ve recommended allows you to dial up the brightness AND the color to your choosing. That means humans of any skin tone can reliably ensure that they look good on camera. Watch your light position. Just like with your camera, your lights should be at the same level as your face, not above and most definitely not below! That way, you’ll avoid having large shadows behind you. Your Sound Your “In-Ear” Purchase: Anker Liberty Air 2 Earbuds — about $100 Many of us use wired headphones on our computers and smartphones. Having one or more wires hanging off our heads might be fine if we’re listening to music or on a traditional phone call, but if we’re on video, it looks awkward and unprofessional. More importantly, being tethered to our computers with a wire prevents us from moving too far away from our computers. In my virtual classes, I almost always have my students get up out of their seats to do physical work with me, so being connected with a wire makes that impossible. While I use a pair of expensive Apple AirPods Pro to connect me to my computer, you can skip the luxury tax and grab a pair of well-reviewed and affordable wireless earbuds like Anker’s. At $100 new (or $80 used), these earbuds are easy to pair with your computer or smartphone, sound great and do NOT cost $240 like Apple’s AirPods Pro. Your “Over-Ear” Purchase: Plantronics BackBeat FIT 500 — about $99 Some of you absolutely cannot or will not use earbuds or in-ear stereo headphones for any of the usual reasons: they’re uncomfortable, don’t fit, etc. For those of you who seek an affordable over-the-ear set of wireless headphones, this pair by Plaintronics is very well-reviewed by The Wirecutter as well as having hundreds of positive ratings on Amazon. They’re sweat proof and should feel super comfy. Buy them new or refurbished (at $54) to save some extra funds! Your Sound Setup: Pairing. “Pairing” is the process of connecting your Bluetooth devices to your smartphone or computer. Both Plantronics BackBeat and the Anker Liberty Air 2 can be paired to two devices, which is awesome. That means you can use them to take calls on your smartphone AND use them for virtual meetings on your computer. However, just remember: always pair your new sound device to your computer FIRST.
https://medium.com/swlh/mastering-online-virtual-meetings-578c7012f22d
['David Koff']
2020-10-14 23:11:00.837000+00:00
['How To', 'Technology', 'Education', 'Community', 'Tech']
624
BZNT to be listed on BIBOX
Hello from Bezant Team, Bezant is pleased to announce our token’s first official listing on Bibox Exchange, a leading AI-enhanced global encrypted digital asset exchange. Bibox Exchange will list the BZNT Token on Thursday, July 5, 2018. The listing comes after a six figure strategic investment by Bibox Exchange announced last week. “Since Bezant raised US$27.88 million earlier in May, we are thrilled to announce another milestone — the listing of our BZNT Token on Bibox Exchange. Its platform is a top-tier global exchange ranked amongst the top ten in the world in terms of exchange volume, thus a great start for Bezant as our first official exchange, making our token globally accessible.” — Daesik Kim, Chief Cryptocurrency Officer of Bezant “Our team is aligned with Bezant’s vision of pushing the boundaries of blockchain’s role into mainstream business operations. We look forward to bringing the Bezant BZNT Token to Bibox Exchange and drive forward the blockchain and cryptocurrency industry together.” — Jeffrey Lei, CEO of Bibox Exchange The BZNT Token will begin trading on Bibox Exchange on Thursday, July 5, 2018. Bibox will also hold an airdrop event until July 10, 2018 for eligible traders, who will be ranked according to net holdings with the top BZNT holders receiving up to 30,000 BZNT. For more information on this exchange please visit their website https://www.bibox.com/
https://medium.com/bezant/bznt-to-be-listed-on-bibox-5c8ad4564a9f
[]
2018-08-31 14:46:56.656000+00:00
['Bitcoin', 'Bznt', 'Bezant', 'Blockchain', 'Technology']
625
Joins in Apache Spark — Part 3. Internals of the join operation in…
Internals of the join operation in spark Broadcast Hash Join When 1 of the dataframe is small enough to fit in the memory, it is broadcasted over to all the executors where the larger dataset resides and a hash join is performed. This has 2 phase, broadcast-> the smaller dataset is broadcasted across the executors in the cluster where the larger table is located. hash join-> A standard hash join is performed on each executor. There is no shuffling involved in this and can be much quicker. spark.sql.autoBroadcastJoinThreshold This can be configured to set the Maximum size in bytes for a dataframe to be broadcasted. -1 will disable broadcast join Default is 10485760 ie 10MB scala> spark.conf.get(“spark.sql.autoBroadcastJoinThreshold”) res0: String = 10485760 spark.broadcast.compress can be used to configure whether to compress the data before sending it. It uses the compression specified in the spark.io.compression.codec config and the default is lz4. We can use other compression codecs such as lz4,lzf, snappy, ZStandard. spark.broadcast.compress is true by default. Run explain to see if this type of join is used. scala> customer.join(payment,Seq(“customerId”)).queryExecution.executedPlan res13: org.apache.spark.sql.execution.SparkPlan = *(5) Project [customerId#6, name#7, paymentId#18, amount#20] +- *(5) SortMergeJoin [customerId#6], [customerId#19], Inner :- *(2) Sort [customerId#6 ASC NULLS FIRST], false, 0 : +- Exchange hashpartitioning(customerId#6, 200) : +- *(1) Project [_1#3 AS customerId#6, _2#4 AS name#7] : +- *(1) SerializeFromObject [assertnotnull(input[0, scala.Tuple2, true])._1 AS _1#3, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(input[0, scala.Tuple2, true])._2, true, false) AS _2#4] : +- Scan[obj#2] +- *(4) Sort [customerId#19 ASC NULLS FIRST], false, 0 +- Exchange hashpartitioning(customerId#19, 200) +- *(3) Project [_1#14 AS paymentId#18, _2#15 … You can notice here that, even though my Dataframes are small in size sometimes spark doesn’t recognize that the size of the dataframe is < 10 MB. To enforce this we can use the broadcast hint . customer.join(broadcast(payment),Seq(“customerId”)) scala> customer.join(broadcast(payment),Seq(“customerId”)).queryExecution.executedPlan res12: org.apache.spark.sql.execution.SparkPlan = *(2) Project [customerId#6, name#7, paymentId#18, amount#20] +- *(2) BroadcastHashJoin [customerId#6], [customerId#19], Inner, BuildRight :- *(2) Project [_1#3 AS customerId#6, _2#4 AS name#7] : +- *(2) SerializeFromObject [assertnotnull(input[0, scala.Tuple2, true])._1 AS _1#3, staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, assertnotnull(input[0, scala.Tuple2, true])._2, true, false) AS _2#4] : +- Scan[obj#2] +- BroadcastExchange HashedRelationBroadcastMode(List(cast(input[1, int, false] as bigint))) +- *(1) Project [_1#14 AS paymentId#18, _2#15 AS customerId#19, _3#16 AS amount#20] +- *(1) SerializeFromObject [assertnotnull(input[0, scala.Tuple3, true])._1 AS _1#14, … If a broadcast hint is specified, the join side with the hint will be broadcasted irrespective of autoBroadcastJoinThreshold. If both sides of the join have the broadcast hints, the side with a smaller estimated physical size will be broadcasted. If there is no hint and if the estimated physical size of the table < autoBroadcastJoinThreshold then that table is broadcasted to all the executor nodes. Spark has a BitTorrent-like implementation to perform broadcast. 
This is to avoid the driver becoming the bottleneck when sending data to multiple executors. Here, the broadcast table is serialized and divided into small chunks by the driver and stored in the driver's BlockManager. Each executor first tries to locate the object in its own BlockManager, and remotely fetches the smaller chunks of the object from the driver and/or other executors if they are available in those executors' BlockManagers. Once an executor has a chunk, it puts it in its own BlockManager for other executors to fetch. Normally, BHJ performs faster than other join algorithms when the broadcast side is small enough. However, broadcasting tables is a network-intensive operation, and it can sometimes cause OOM errors or even perform worse than other algorithms when the broadcast table is big enough. BHJ is not supported for a full outer join. For a right outer join, only the left-side table can be broadcasted, and for the other left joins only the right table can be broadcasted. Shuffle Hash Join Shuffle hash join has 2 phases: 1) A shuffle phase, where the data from the join tables is partitioned based on the join key. This phase shuffles data across the partitions; the idea is that if rows in the 2 tables have the same keys, they end up in the same partition, so the data required for the join is available locally. 2) A hash join phase: each partition performs a classic single-node hash join. What shuffle hash join does is break one big join of 2 tables apart into smaller, localized joins. Shuffle is a very expensive operation, as it involves a lot of network-intensive movement of data across the nodes in the cluster. Also, note that building the hash tables is itself an expensive, memory-bound operation, so this approach is not well suited for joins that wouldn't fit in memory. The performance also depends on the distribution of keys in the dataset: the greater the number of unique join keys, the better the data distribution we get. The maximum amount of parallelism that we can achieve is proportional to the number of unique keys. Say we are joining 2 datasets: something unique like empId would be a good join key, whereas something like DepartmentName wouldn't have a lot of unique values and would limit the maximum parallelism that we could achieve. Gotchas: We need to be cognizant of the situation in which one single partition (a small subset of all the partitions) gets way too much data compared to the others. This can happen when there is skew in our dataset; the distribution of the data should be uniform across the cluster. Say we are joining stock data on the company's stock ticker, so each partition gets data based on the company's name. There might not be a whole lot of data in the partition that houses stocks of Z, but there might be way too much data in the partition that houses A, such as Apple and Amazon. We would have to do something called salting in these kinds of situations. Salting is a technique of adding some randomness to non-unique keys. Say we have 1 million Amazon stock records in our dataset and only 100 Zillow records. We selectively add a random integer to the Amazon records so that the hash of the Amazon data is distributed more uniformly among different partitions.
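To make the salting idea concrete, here is a minimal Scala sketch. It assumes two hypothetical DataFrames, stocks (heavily skewed on its ticker column) and companies (the smaller side); the names, columns, and salt count are illustrative only, not taken from the article's example:

import org.apache.spark.sql.functions._

val numSalts = 8 // how far to spread the hot keys; tune this to the skew you observe

// Skewed side: tag every row with a random salt in [0, numSalts)
val saltedStocks = stocks.withColumn("salt", (rand() * numSalts).cast("int"))

// Other side: replicate each row once per salt value so every (ticker, salt) pair can still match
val saltedCompanies = companies.withColumn("salt", explode(array((0 until numSalts).map(lit): _*)))

// Join on the composite key; rows for a hot ticker are now spread across numSalts partitions
val joined = saltedStocks.join(saltedCompanies, Seq("ticker", "salt")).drop("salt")

The trade-off is that the smaller side is duplicated numSalts times, so salting only pays off when a handful of hot keys genuinely dominates the partition sizes.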
The main thing to note here is that Shuffle Hash join will be used as the join strategy only when spark.sql.join.preferSortMergeJoin is set to false and the cost to build a hash map is less than sorting the data. By default, sort merge join is preffered over Shuffle Hash Join. ShuffledHashJoin is still useful when : 1) any partition of the build side could fit in memory 2)one table is much smaller than the other one, then cost to build a hash table on a smaller table is smaller than sorting the larger table. Sort merge join Sort merge join is the default join strategy if the matching join keys are sortable and not eligible for broadcast join or shuffle hash join. It is a very scalable approach and performs better than other joins most of the times. It has its traits from the legendary map-reduce programs. What makes it scalable is that it can spill the data to the disk and doesn’t require the entire data to fit inside the memory. It has 3 phases: 1)Shuffle Phase: The 2 large tables are repartitioned as per the join keys across the partitions in the cluster. 2)Sort Phase: Sort the data within each partition parallelly. 3)Merge Phase: Join the 2 sorted + partitioned data. This is basically merging of the dataset by iterating over the elements and joining the rows having the same value for the join key. Also, note that sometimes spark by default chooses Merge Sort Join which might not be always ideal. Let's say we have a customer Dataframe and a payment dataframe. scala> println(SizeEstimator.estimate(payment)/1000000.00) 40.329272 scala> println(SizeEstimator.estimate(customer)/1000000.00) 40.260272 Both of the data frames are almost 40MB. By default, spark chooses Sort-Merge Join but sometimes a Broadcast join can do better. scala>customer.join(payment,"customerId").queryExecution.executedPlan res28: org.apache.spark.sql.execution.SparkPlan = *(5) Project [customerId#49, name#50, paymentId#37, amount#39] +- *(5) SortMergeJoin [customerId#49], [customerId#38], Inner scala> spark.time(customer.join(payment,”customerId”).show) Time taken: 693 ms When we provide the broadcast hint, scala>customer.join(broadcast(payment),"customerId").queryExecution.executedPlan res29: org.apache.spark.sql.execution.SparkPlan = *(2) Project [customerId#49, name#50, paymentId#37, amount#39] +- *(2) BroadcastHashJoin [customerId#49], [customerId#38], Inner, BuildRight scala>spark.time(customer.join(broadcast(payment),"customerId").show) Time taken: 253 ms We have to remember that there is no single right answer to a problem. The right answer is always it depends . We have to utilize spark’s features generously as per our data. Note: SizeEstimator.estimate gives an estimate of how much space the object takes up on the JVM heap. This will often be higher than the typical serialized size of the object. Gotchas: For Ideal performance of Sort-Merge join, it is important that all rows having the same value for the join key are available in the same partition. This warrants for the infamous partition exchange(shuffle) between executors. Collocated partitions can avoid unnecessary data shuffle. Data needs to be evenly distributed n the join keys. The number of join keys is unique enough so that they can be equally distributed across the cluster to achieve the max parallelism from the available partitions. Cartesian Join When no join keys are specified and the join type is an inner join, CartesianProduct is chosen. This is an inherently very expensive operation and should not be chosen unless necessary. More on this here. 
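If you genuinely need a Cartesian product, it is better to ask for it explicitly than to rely on a key-less inner join. Below is a short sketch reusing the customer and payment frames from above; note that whether the implicit form is rejected or silently planned as a CartesianProduct depends on your Spark version and the spark.sql.crossJoin.enabled setting:

// Explicit Cartesian product: every customer row paired with every payment row
val allPairs = customer.crossJoin(payment)

// Alternatively, allow implicit cross joins globally (use with care, this can hide mistakes):
spark.conf.set("spark.sql.crossJoin.enabled", "true")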
BroadcastNestedLoopJoin This type of join is chosen when no joining keys are specified and either has a broadcast hint or if 1 side of the join could be broadcasted and is less than spark.sql.autoBroadcastJoinThreshold. This type of join is also the final fall back option if no join keys are specified, and is not an inner join. But this could be very slow and can cause OutOfMemoryExceptions! if the broadcasted dataset is large enough. Key Takeaways: Sort-Merge join is the default join and performs well in most of the scenarios. And for cases, if you are confident enough that Shuffle Hash join is better than Sort-Merge join, disable Sort-Merge join for those scenarios. However, when the build size is smaller than the stream size, Shuffle Hash join will do better than Sort-Merge join. Tune the spark.sql.autoBroadcastJoinThreshold accordingly if deemed necessary. Try to use Broadcast joins wherever possible and filter out the irrelevant rows to the join key before the join to avoid unnecessary data shuffling. Joins without unique join keys or no join keys can often be very expensive and should be avoided. The Part 1, and Part 2 are available here which covers the basics of Join operation. Thanks for reading! Please do share the article, if you liked it. Any comments or suggestions are welcome! Check out my other articles, Intro to Spark and Spark 2.4.
https://medium.com/@achilleus/https-medium-com-joins-in-apache-spark-part-3-1d40c1e51e1c
[]
2019-03-13 01:19:03.315000+00:00
['Technology', 'Programming', 'Scala', 'Data', 'Apache Spark']
626
Electronic Records in Present-Day Healthcare System
At present the healthcare system witnesses transmission of patients’ medical records from paper to electronic. After the US adopted the mandated switch to electronic records, they received extensive news coverage both in medical and mainstream publications. The digitalization era has given birth to a number of terms, such as electronic health records (EHRs) and electronic medical records (EMRs) that stand in the foreground of such publications, used sometimes interchangeable. However, there are distinct differences between them — as well as between other newly coined terms that describe different approaches to digitalization of the medical life. Electronic Health Records? An electronic health record (EHR) is a digital version of a patient chart, an inclusive snapshot of the patient’s medical history. It contains input from all the practitioners that are involved in the client’s care, offering a comprehensive view of the client’s health and treatment history. Electronic health records are designed to be shared with other providers and authorized users may instantly access a patient’s EHR from across different healthcare providers. Elements of EHRs As a rule, EHRs contain the following data: Patient’s demographic, billing, and insurance information; Physical history and physicians’ orders; Medication allergy information; Nursing assessments, notes, and graphics of vital signs; Laboratory and radiology results; Trending labs, vital signs, results, and activities pages for easy reference Links to important clinical information and support Reports for quality and safety personnel Electronic Medical Records An electronic medical record (EMR) is a digital version of a patient’s chart used by a single practice: a physician, nurse practitioner, specialist, dentist, surgeon or clinic. In its essence, it is digitalized chart that healthcare facilities previously used to keep track of treatments, medications, changes in condition, etc. These medical documents are private and confidential and are not usually shared outside the medical practice where they originated. Electronic medical records make it easier to track data over time and to monitor the client’s health more reliably, which leads to better long-term care. Elements of EMRs: EMRs usually contain the following information about the client: Medical history, physicals, notes by providers, and consults from other physicians Medications and allergies, including immunization history Alerts to the office and the patients for preventative tests and/or procedures, e.g. lab tests to follow-up colonoscopies Personal Health Records An electronic personal health record (PHR) provides an electronic record of the client’s health-related information and is managed by the client. It is a universally accessible and comprehensible tool for managing health information, promoting health maintenance, and assisting with chronic disease management. A PHR may contain information from multiple sources such as physicians, home monitoring devices, wearables, and other data furnished by the client. With PHRs, each client can view and control their medical data in a secure setting and share it with other parties. However, it is not a legal record unless so defined and is subject to various legal limitations. Besides, though PHRs can provide important insights and give a fuller view of the client’s health and lifestyle, its inaccuracy and lack of structure lead to limited use of it in the clinical and medical studies. 
Benefits of Electronic Records Offer Digital medical records may offer significant advantages both to patients and healthcare providers: Medical errors are reduced and healthcare is improved thanks to accurate and up-to-date information; Patient charts are more complete and clear — without the need to decipher illegible scribbles; Information sharing reduces duplicate testing; Improved information access makes prescribing medication safer and more reliable;Promoting patient participation can encourage healthier lifestyles; More complete information improves diagnostics; Facilitating communication between the practitioner and client; Enabling secure sharing of client’s medical information among multiple providers; Increasing administrative efficiency in scheduling, billing, and collections, resulting in lower business-related costs for the organization So where is AI? Electronic records are expected to make healthcare more efficient and less costly. However, in reality, under less-than-ideal circumstances, workarounds and errors of different types appear and complaints mount. Improving EHR/EMR design and handling requires mapping complaints to specific EHR/EMR features and design decisions, which is not always a straightforward process. Over the last year, more informatics researchers and software vendors have turned their attention to EHR/EMR systems, and more of them have started to rely on AI to give deeper insights into the design and handling of the electronic records. So far AI is used to assist medical professionals with electronic records flow in the following spheres: Data extraction from free text The free structure of clinical notes is notoriously difficult to read and categorize with straightforward algorithms. AI and natural language processing, however, can handle the heterogeneity of unstructured or semistructured data making them a useful part of EHRs. At present, healthcare providers can extract data from faxes at OneMedical, or by using Athena Health’s EHR. Apart from them, Flatiron Health’s human “abstractors” review provider uses AI to recognize key terms and uncover insights from unstructured documents. Amazon Web Services recently announced a cloud-based service that uses AI to extract and index data from clinical notes. Data collection from multiple sources As healthcare costs grow and new methods are tested, home devices such as glucometers or blood pressure cuffs that automatically measure and send the results to the EHR are gaining momentum. Moreover, data streams from the Internet of Things, including home monitors, wearables, and bedside medical devices, can auto-populate notes and provide data for predictive analytics. Some companies have even more advanced devices such as the smart t-shirts of Hexoskin, which can measure several cardiovascular metrics and are being used in clinical studies and at-home disease monitoring. This means, that future EHRs should integrate telehealth technologies. Besides, electronic patient-reported outcomes and personal health records are also being leveraged more and more as providers emphasize the importance of patient-centered care and self disease management; all of these data sources are most useful when they can be integrated into the existing EHR. Clinical documentation and data entry EHR documentation is one of the most time-consuming and irritating tasks in the modern care environment. A recent AMA study found that clinicians spend twice as much time over the keyboard as they do talking to their patients. 
Artificial intelligence with the help of NLP can automatically assemble and repackage the necessary components of clinical documentation to build clinical notes that accurately reflect a patient encounter or diagnosis. Nuance, for example, offers AI-supported tools that integrate with commercial EHRs to support data collection and clinical note composition. Such carefully engineered integration of AI into the note creation process would not only reduce the rummaging through bins of pieces, but could improve the output design, making clinical notes more useful, readable, and cogent and meeting all requirements for clinical documentation.” Clinical decision support Decision support, which recommends treatment strategies, used to be generic and rule-based. AI machine-learning solutions are emerging today from vendors including IBM Watson, Change Healthcare, or AllScripts that learn based on new data and enable more personalized care. For instance, Google is developing prediction models from big data to warn clinicians of high-risk conditions such as sepsis and heart failure. Enlitic, and a variety of startups are developing AI-derived image interpretation algorithms. Jvion offers a “clinical success machine” that identifies patients most at risk as well as those most likely to respond to treatment protocols. Each of these systems could be integrated into EHRs to provide decision support. Interoperability AI can address the core interoperability issues that have made it so difficult for providers to access and share information with the current generation of health IT tools. The industry is still struggling to overcome the challenges of proprietary standards, data silos, privacy concerns, and the lingering competitive disadvantages of sharing data too freely. With AI algorithms learning from inter-specialty communication specifics and facilitating shared decision making by mining patient input and feedback, the final clinical note will be the optimal product for the user in line with the interdisciplinary care concept.
https://medium.com/sciforce/electronic-records-in-present-day-healthcare-system-2d6649646aaa
[]
2020-02-27 16:22:03.602000+00:00
['Healthcare', 'Machine Learning', 'Health Technology', 'Artificial Intelligence', 'NLP']
627
Joyful Exercises for Contributing to Low-Fat, Lean Muscles, and Dense Bones
Joyful Exercises for Contributing to Low-Fat, Lean Muscles, and Dense Bones Fitness is a passion for me. I do it as a ritual at home nowadays. I used to go to the gym and love the rituals. However, nowadays, it is difficult for me to go to the gym. Not going to the gym does not mean giving up fitness goals. I created a customised gym at home for my specific needs. I perform a wide variety of workout regimes. As I get older, the type of exercises changed a lot. I used to do a lot of cardio when I was younger. My main focus is weight training, resistance training, callisthenics, high-intensity interval training (HIIT), and mild cardio. The purpose of this post to share with you three joyful exercises I perform a daily basis to keep my lean muscles, bones density, and low-fat percentage. Even though I passionately work out, my approach to exercise is gentle. For example, instead of using too heavy dumbbells, I prefer using my body weight. It is natural and produces the required outcomes for me. Let me introduce you the three simple exercise I do almost every day to maintain my low-body fat percentage, lean muscles, and dense bones. Apart from a few hundred push-ups, I use the pull-up machine every day. Here are the three joyful exercises; enjoy.
https://medium.com/illumination-curated/joyful-exercises-for-contributing-to-low-fat-lean-muscles-and-dense-bones-18a9d7614afc
['Dr Mehmet Yildiz']
2020-12-28 16:52:20.670000+00:00
['Self Improvement', 'Fitness', 'Writing', 'Health', 'Technology']
628
Key advantages of blockchain technology
Blockchain is not in need of any introduction; in recent years there has been no other technology that has managed to garner the attention of investors, business moguls, startups, and employees alike. It has captured people's attention and interest primarily because this technology is growing at a breakneck pace. In fact, as per LinkedIn, Blockchain was one of the leading skills in great demand in 2019, and the technology is expected to expand its market at a CAGR of 67.3% between 2016 and 2025. So, what makes Blockchain so popular? Let's explore this ahead in the blog. Benefits of Blockchain technology: 1. A safe and secure system: Digital transformation has brought the world onto one platform where people can interact virtually and exchange data and money. However, this system is not completely infallible. In fact, the rising number of data breach cases has made us think about whether we should actually trust the conventional platforms. To avoid all this, we have Blockchain technology, which helps in overcoming the hurdles of the virtual platforms. Blockchain provides a DLT platform where all the data is time-stamped and cryptographically encrypted, which ensures that the data is safe. 2. Decentralization: The conventional system is a centralized one, which means that any failure at the central server may disrupt the entire system, affect its functioning, and in turn impact the productivity of the business. Thus, there is a need for a platform that can overcome this problem. Blockchain plays a key role in this sense; because it is a decentralized technology, the possibility of system failure is reduced, and even if one of the nodes stops working, the other nodes take up the load. This makes it a strong choice for your work. 3. Peer-to-peer network: Blockchain works on peer-to-peer interaction, which means that the two connecting parties interact with each other directly. You don't have to wait for third-party validation and approval. Depending on a third party increases the cost and delays the process; with Blockchain, that dependency decreases, and the buyer and seller can interact and make payments directly. 4. Smart contracts: One of the best things about Blockchain is smart contracts. These digitized, pre-programmed contracts are executed automatically once the conditions written into them are met. Smart contracts find use across different industrial segments; they improve the payment process and remove the unwanted delays that exist in the system. These are a few of the advantages of Blockchain, but the biggest one is that it helps in making the system robust and free from errors. Its presence across the different industrial segments shows how much potential this technology has. The future For those who are willing to make a career in Blockchain or want to become a trained Blockchain professional, Blockchain Council is offering this program. The Blockchain courses offered by Blockchain Council are the best way to delve deeper into this technology while also learning the practical applications. So, connect with Blockchain Council today.
https://medium.com/@sophiacasey008/key-advantages-of-blockchain-technology-32aaa5a4b825
['Sophia Casey']
2020-12-01 07:32:23.102000+00:00
['Blockchain Startup', 'Blockchain', 'Blockchain Technology', 'Blockchain Development', 'Blockchain Course']
629
Should blockchain software be openly available?
By Olga Grinina The first generation of blockchains stems from the Bitcoin blockchain, the ledger underpinning the decentralized, peer-to-peer cryptocurrency that has gone from Slashdot miscellanea to a mainstream topic. Along the way, many more variations of the Bitcoin codebase have appeared. Some started as proposed extensions to Bitcoin, such as the Zerocash protocol, which aimed to provide transaction anonymity and fungibility but was eventually spun off into its own currency, Zcash. While Zcash has brought its own innovations, using recent cryptographic advances known as zero-knowledge proofs, it maintains compatibility with the vast majority of the Bitcoin code base, meaning it too can benefit from upstream Bitcoin innovations. This blockchain is a distributed ledger that keeps track of all users’ transactions to prevent them from double-spending their coins (a task historically entrusted to third parties: banks). To prevent attackers from gaming the system, the ledger is replicated to every computer participating in the Bitcoin network and can be updated by only one computer in the network at a time. To decide which computer earns the right to update the ledger, the system organizes every 10 minutes a race between the computers, which costs them (a lot of) energy to enter. The winner wins the right to commit the last 10 minutes of transactions to the ledger (the “block” in blockchain) and some Bitcoin as a reward for their efforts. This setup is called a proof of work consensus mechanism. With the words “smart contract” in the air and a proved — if still slow — technology to run them, another idea came to fruition: permissioned blockchains. So far, all the blockchain networks we’ve described have had two unsaid characteristics: They are public (anyone can see them function), and they are without permission (anyone can join them). These two aspects are both desirable and necessary to run a distributed, non-third-party-based currency. As blockchains were being considered more and more separately from cryptocurrencies, it started to make sense to consider them in some private, permissioned settings. A consortium-type group of actors that have business relationships but don’t necessarily trust each other fully can benefit from these types of blockchains — for example, actors along a logistics chain, financial or insurance institutions that regularly do bilateral settlements or use a clearinghouse, idem for healthcare institutions. Once you change the setting from “anyone can join” to “invitation-only,” further changes and tweaks to the blockchain building blocks become possible, yielding interesting results for some. For a start, proof of work, designed to protect the network from malicious and spammy actors, can be replaced by something simpler and less resource-hungry, such as a Raft-based consensus protocol. A tradeoff appears between a high level of security or faster speed, embodied by the option of simpler consensus algorithms. This is highly desirable to many groups, as they can trade some cryptography-based assurance for assurance based on other means — legal relationships, for instance — and avoid the energy-hungry arms race that proof of work often leads to. This is another area where innovation is ongoing, with Proof of Stake a notable contender for the public network consensus mechanism of choice. It would likely also find its way to permissioned networks too. 
In all of the cases so far, one thing is clear: The goal of using a blockchain is to raise the level of trust participants have in the network and the data it produces — ideally, enough to be able to use it as is, without further work. Reaching this level of trust is possible only if the software that powers the network is free and open source. Even a correctly distributed proprietary blockchain is essentially a collection of independent agents running the same third party’s code. By nature, it’s necessary — but not sufficient — for a blockchain source code to be open source. This has both been a minimum guarantee and the source of further innovation as the ecosystem keeps growing. Finally, it is worth mentioning that while the open nature of blockchains has been a source of innovation and variation, it has also been seen as a form of governance: governance by code, where users are expected to run whichever specific version of the code contains a function or approach they think the whole network should embrace. When it comes to practical application at the current market, decentralized protocols are taking very different stances in regard to sharing their code. Many still have it open on Github, while others for some reason are taking more precautious approach sharing merely a source code for their native token. Everyone is, of course, entitled to their own views on the value of patents and whether their company should file for them. But regardless of your position, we, as a community, must acknowledge that there are others in this world who are obtaining blockchain patents purely for their own profit motives. For example, Erich Spangenberg of IPwe has recently stated publicly that the realm is ‘a curious path how a collection of misfit trolls, geeks and wonks ended up here — but we are going to crush it and make a fortune…’
https://medium.com/revain/should-blockchain-software-be-openly-available-8f38d281aef7
[]
2018-10-01 09:00:31.112000+00:00
['Bitcoin', 'Blockchain Technology']
630
How To Inspect Your Local Network
How To Inspect Your Local Network A practical guide with examples I must concede, spying on a network (and everything and everybody on it) is just candid fun. Imagine, silently typing on your keyboard, exploring a network, examining what things are there and what they are up to. How would this not be equally intriguing as reading a mystery novel? Photo by Artur Rutkowski on Unsplash But do you know how to scan your local network How to find its vulnerabilities? So few of us know how to do this. How do you get a list of every open port on your computer? Or of every connection to the WiFi? This topic is often discussed on a theoretical level, but very rarely with a practical approach. That is why all talks on cybersecurity and hacking are always full to the last seat. We want to know about this, we want to learn. And yes, we absolutely do want to spy on everybody 😎 The following guide describes how to inspect your network in Linux. We will start with ourselves First, we have to figure out where we are, do we have a network connection? How many do we have? To get a list of currently active network connections, we will use the very convenient ip command. ip has been replacing the ifconfig command since iproute2 tools became available. ip addr show will show us everything that has an IP address, a MAC address or is pretending to have one (i.e. veth — Virtual Ethernet Device). To make the output more readable, we’ll add -c (=use colour). For ease of use, I suggest adding the alias ip=ip -c to your environment. How to read the above? Immediately, you can see 4 sections: lo , wlp58s0 , docker-0 and br-9ad7ea412ea4 . lo stands for localhost (archaically called loopback). Most of you probably already know, but let’s state it clearly nevertheless: this address is used for internal testing of network services. It is implemented entirely within your computer and is not accessible from the outside (from other computers, devices, the internet). From the 3rd line of its description ( inet 127.0.0.1/8 ), we can see that its IP address 127.0.0.1 , just as expected. wl... stands for wireless LAN, this is our wireless connection to our immediate network. In the 3rd line ( inet 10.0.0.8/24 ) we can see that its IP address is 10.0.0.8 . 10.0.0.8 is an interesting address, it is the public IP address of our computer. Other devices on our network see our computer as the host 10.0.0.8 . But because the IP starts with 10.- we also know that this is a private network, our computer is not accessible from the internet. Which IPs are private again and which public? The IETF’s standard (document RFC-1918) explains that IP addresses 10.0.0.0 - 10.255.255.255 are reserved for the private IP space and cannot be routable on the global internet. (Together with a few addresses in the 172.- and 192.- blocks) wl... appears only if you have a wireless interface or adapter. If your connection to a network is via a cable, then you are looking for a connection called eth... . If there are several wired or wireless interfaces available, all will be listed. In the inet6... line of each section we can see the various MAC addresses of each device. To see only the list of MAC addresses, run ip -c link show . What role do Docker containers play? Did you know or, better, did you pay attention, to the detail that every Docker install comes with a bridge network? This is what the above docker0 is referencing. As long as no containers are running, the bridge docker0 ’s status is DOWN . 
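If you want to look at that same bridge from Docker's side, the Docker CLI can list and describe it (a quick aside; the exact output depends on your Docker version):
$ docker network ls
$ docker network inspect bridge
The second command shows, among other things, the subnet Docker assigned to docker0 and which containers are currently attached to it.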
What happens if I now start a few Docker containers and run ip -c addr show again? I get a veth.. section for every container. veth s are virtual ethernet connections. They always come in pairs, one is created in the localhost namespace, and the other in the namespace of the container’s network. What are we showing? The next step in our “Practical guide with examples” is to figure out what services we are exposing to others on this network. We will use nmap . nmap is a great tool for network exploration and security auditing. It is also open-source and very straightforward to use. nmap researches which hosts are available on the network, what services those hosts offer, what OS they are running, what type of packet filtering/firewalls are in use, … BE CAREFUL! Scanning random servers can get you in trouble! From the above, we learned that our computer has 2 IP addresses, the localhost 127.0.0.1 and the network address 10.0.0.8 . Both of these represent our computer. Everything on localhost is visible only from our computer and everything on 10.0.0.8 is visible to every device on this network. The good news is that these 2 IPs do not overlap by default. What is accessible on 127.0.0.1 is not by default accessible on 10.0.0.8 . We have to put in extra effort to make something from our localhost visible to the network. But the bad news is that sometimes we are running services, which by default ARE visible to everybody on our network. 😨 And let us be honest, 99.99% of us are not sure which services these are. Thus, let's ask the computer which ports are open. What is hooked to localhost? nmap is truly simple to use. It does have lots of settings, but for starters (because we are scanning localhost and not spamming a random server) it is enough if we just give it an IP address. Just short note, scanning TCP ports ( -sT ) is much faster than scanning UTP ports, and checking the status of just 1 port ( -p 80 ) is vastly faster than scanning all ports. Because ..localhost, we’ll scan everything: $ nmap 127.0.0.1 Starting Nmap 7.60 ( https://nmap.org ) at ... Nmap scan report for localhost (127.0.0.1) Host is up (0.00020s latency). Not shown: 992 closed ports PORT STATE SERVICE 139/tcp open netbios-ssn 445/tcp open microsoft-ds 631/tcp open ipp 1010/tcp open surf 1111/tcp open lmsocialserver 1500/tcp open vlsi-lm 3306/tcp open mysql 5432/tcp open postgres 4000/tcp open remoteanything Ok, how does one read this? Is this list too long, too short, just right? It is localhost, after all, it is not accessible to others. Localhost or not, the rule with ports is very simple: Keep all ports closed, except the ones you are using. I went through the above list and instantly understood all but the first 3 ports: To learn more about these 3 ports, we can use -A to enable OS detection, version detections, script scanning and traceroute and we can limit the scan to only 1 port with -p 631 . $ nmap 127.0.0.1 -A -p 139 Nmap scan report for localhost (127.0.0.1) Host is up (0.000089s latency). Starting Nmap 7.60 ( https://nmap.org ) at ...Nmap scan report for localhost (127.0.0.1)Host is up (0.000089s latency). PORT STATE SERVICE VERSION 139/tcp open netbios-ssn Samba 4.10-Ubuntu (workgroup: ABCD) Service Info: Host: MON Host script results: |_nbstat: NetBIOS name: MON, .... | smb-os-discovery: | OS: Windows ...... |_ System time: .... | smb-security-mode: | account_used: ... | authentication_level: ... |_ message_signing: .... 
------------------snip------------------------------------ This is one way of figuring out more about these ports; a far more efficient way is to google them all 😄. After googling these 3, I discovered that: 631 is used to communicate with the printer. This makes sense since there is a printer right next to me 😌. 445 is used for a Microsoft service for file sharing. And it appears that this is a horrible port to have open 😱. For a long time this port has been known as a horrendous security hole: “As you might imagine, malicious hackers have been having a field day scanning for port 445, then easily and remotely commandeering Windows machines.” — Gibson Research Corporation 139 is also used for file sharing to Windows computers. Apparently, it is not that terrifying to have this one open. I just created some work for myself. I will have to deal with 445 and 139. Soon. There is a reason why computer administration is a full-time job. But I will definitely check who else has 445 open at work 🍹. (later that week): Ha, 12 other hosts have 445 open. I guess I will have to share the knowledge and offer to explain the danger. Again, more work. Who said knowledge is power? Knowledge is mostly work. What are we sharing with everybody on the network? After the 445 gave us a scare, we might have brushed it off thinking, “It is only localhost, localhost ports are most probably not really dangerous, .. I think. They are at least benign enough for me to procrastinate on closing them for a few days/months. It is the public ports which I will concentrate on”. So let's see what is visible to other devices on our network. $ nmap 10.0.0.8 Starting Nmap 7.60 ( https://nmap.org ) at ... Nmap scan report for Mon (10.0.0.8) Host is up (0.00020s latency). Not shown: 996 closed ports PORT STATE SERVICE 139/tcp open netbios-ssn 445/tcp open microsoft-ds 1500/tcp open vlsi-lm 3306/tcp open mysql 8080/tcp open http-proxy 8888/tcp open sun-answerbook 9000/tcp open cslistener Uh-oh, 💀 the dreaded 139 and 445 are open… Which makes sense, since they are meant for file sharing between computers, not between this computer and itself. This will need my attention ASAP. But first, 😉 why are all the Docker ports visible to everybody? Ah… this is turning into a much bigger mess than anticipated. After some googling, I learnt that by default Docker exposes ports on the IP address 0.0.0.0 and not 127.0.0.1 . The 0.0.0.0 address is a placeholder address; its meaning is similar to that of * in a regex. Saying “Expose 0.0.0.0:30 ” means expose the port 30 on all IP addresses, thus also our public 10.0.0.8 . Each time you mapped a Docker port to an outside port, you were making this port publicly accessible. Great! So, to remedy this, I might want to update the docker-compose.yml files by explicitly binding ports to IPs (a minimal sketch of this change follows below). Once this is done, nmap no longer shows Docker ports among the publicly open ports. Perfect, we finally have all ports accounted for. Now we can move on to … Who else is here? For starters, let's check which WiFi network we are connected to. Let's run nmcli dev wifi : nmcli will display a brief summary of all available wifi access points (APs). It can also be used to connect to a selected network ( nmcli dev wifi connect MANET-3 ). Given that we have 3 wifi networks, we should definitely scan all 3 of them. But for starters, let's see what is on MANET-2 . We will use nmap . To see which other hosts are "up"/"live" on our network, we need to know our IP address and our subnet mask.
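Before moving on to scanning the rest of the network, here is the docker-compose change promised above. This is only a minimal sketch with a hypothetical service name, image and port; the important part is the 127.0.0.1: prefix in the port mapping, which tells Docker to publish the port on localhost only instead of on 0.0.0.0:

version: "3.8"
services:
  webapp:                       # hypothetical service name
    image: my-app:latest        # hypothetical image
    ports:
      - "127.0.0.1:8080:8080"   # reachable only from this machine
      # instead of "8080:8080", which publishes the port on 0.0.0.0

Now, back to finding out who else is on the network.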
From running the ip command, we got both these details: 10.0.0.8/24 . A quick reminder of what a subnet mask is: the IPs of all hosts in the same network start with the same bits, only the last bits are different. Our subnet mask /24 means that the first 24 bits will be the same. Given our IP 10.0.0.8/24 , we know that all hosts in our network will be between 10.0.0.1 and 10.0.0.254 ( 10.0.0.0 is the network address and 10.0.0.255 is used for broadcasting). To quickly see all devices on my network, run nmap with the option -sn . This means nmap will not scan the ports; it will just return a list of "live" hosts: $ nmap -sn 10.0.0.8/24 Starting Nmap 7.60 ( https://nmap.org ) at ... Nmap scan report for _gateway (10.0.0.1) Nmap scan report for 10.0.0.3 Nmap scan report for 10.0.0.4 Nmap scan report for 10.0.0.5 Nmap scan report for MON (10.0.0.8) Nmap scan report for 10.0.0.9 Nmap done: 256 IP addresses (6 hosts up) scanned in 15.31 seconds There are 6 active hosts in our network. 10.0.0.1 is the gateway, the router connecting our private network to the internet. 10.0.0.8 is us (our computer's name is MON). But who are the others? To learn more about them we can run nmap with -sV , to see which services are listening on which ports, -sS , for the quickest and relatively stealthy scan, -O , to enable OS detection, -A , to enable OS detection and other features, -p 80 , to check only the port 80, and -sT , to scan only TCP ports (scanning TCP ports is much quicker than scanning UDP ports). Investigating host 10.0.0.3 reveals: $ sudo nmap -sS 10.0.0.3 -A Starting Nmap 7.60 ( https://nmap.org ) at ... Nmap scan report for 10.0.0.3 PORT STATE SERVICE VERSION 80/tcp open http GoAhead WebServer |_http-server-header: GoAhead-Webs | http-title: Range Extender |_Requested resource was http://10.0.0.3/login.asp MAC Address: 09:0A:XX:XX:XX (Tenda Technology) Device type: general purpose Running: Wind River VxWorks OS CPE: cpe:/o:windriver:vxworks OS details: VxWorks Network Distance: 1 hop Nmap done: 1 IP address (1 host up) scanned in 20.60 seconds Look at the line | http-title: Range Extender and OS details: VxWorks . This host is just a WiFi range extender. A quick Google search for the VxWorks OS reveals a pretty awesome wiki page explaining that this OS is used for embedded systems, that the Mars rovers are using it, as well as the ASIMO robot and a bunch of auto manufacturers. And here I am using it to merely extend my WiFi range. This is the technology that went to Mars and I paid 40€ for the hardware, software, transportation, design,… Feels like uncovering a small jewel. Investigating host 10.0.0.4 reveals: $ nmap -sS 10.0.0.4 -A Starting Nmap 7.60 ( https://nmap.org ) at ... Nmap scan report for 10.0.0.4 PORT STATE SERVICE VERSION 80/tcp open http nginx 8008/tcp open http? 8009/tcp open ssl/ajp13? | ssl-cert: Subject: commonName=XXXXXXXXXXXXXXXXXXXXX 8443/tcp open ssl/https-alt? 9000/tcp open ssl/cslistener? MAC Address: XXXXXXXXX (Hon Hai Precision Ind.) Device type: firewall Running (JUST GUESSING): Fortinet embedded (87%) OS CPE: cpe:/h:fortinet:fortigate_100d Aggressive OS guesses: Fortinet FortiGate 100D firewall (87%) No exact OS matches for host (test conditions non-ideal).
Nmap done: 1 IP address (1 host up) scanned in 187.96 seconds This is an interesting device; it runs a full-fledged Nginx web server. And it seems to have been built by Hon Hai Precision Ind., a Taiwanese electronics contract manufacturer. Since they are a contractor, we are no closer to figuring out what device this is. The Fortinet FortiGate is a professional firewall. 3 ports are protected by SSL (the connections are encrypted). Not knowing what this device is, is actually a happy surprise. Some people have put at least some effort into making sure this device demands a bit of know-how and effort to crack. After checking manually, it turns out it is the TV. Checking 10.0.0.5 was an enigma. nmap couldn't figure out anything. I ran it with all kinds of settings and tried out all kinds of approaches, but the main problem was that this device had all ports closed. Everything. Each and every one of them. It should come as no surprise that a port-scanning tool isn't good at figuring out a device which has all its ports shut. 😅 I checked manually and it was an Android smartphone. Last one to go! What is 10.0.0.9 ? $ nmap 10.0.0.9 -sT Starting Nmap 7.60 ( https://nmap.org ) at ... Nmap scan report for 10.0.0.9 PORT STATE SERVICE 62078/tcp open iphone-sync Nmap done: 1 IP address (1 host up) scanned in 40.79 seconds What a surprise, this device practically introduced itself. It appears to be an iPhone, which apparently likes to sync something via this port. Googling this port proved to be very Apple-like: lots of rumours, lots of guesses, no official explanation. From what is written about this port, it seems to be used by iTunes for data-syncing on a WiFi network. I do wonder how often this syncing is happening and if the owner of this phone knows about it. Try nmap today With just a few nmap commands I was able to learn a great amount about the devices on my network. Pretty good, considering I just learned about this tool recently. Now it is your turn: check your devices and your networks, and try not to get in trouble. External sources
https://medium.com/swlh/how-to-inspect-your-local-network-4187d7ae3b10
['Ines Panker']
2019-10-01 20:51:02.604000+00:00
['Software Development', 'Nmap', 'Networking', 'Technology', 'Cybersecurity']
631
How India revolutionized its voting system through Electronic Voting Machines
Innovations in technology have rapidly transformed democracies in the last two decades. One area where digital technology's relevance and impact have often been debated is the use of Electronic Voting Machines (EVMs) in the choice of political leaders. Compared to a pencil-and-paper system, technology has tremendous potential to empower citizens, amplify their voices, and allow citizens to hold governments accountable. The Design and Make of EVMs The EVM was first conceived in 1977 by the Election Commission of India, and the Electronics Corporation of India Ltd. (ECIL), Hyderabad, was assigned the task of designing and developing it. In 1979 a prototype was developed, which was demonstrated by the Election Commission before the representatives of political parties on 6th August, 1980. Bharat Electronics Ltd. (BEL), Bangalore, another public-sector undertaking, was co-opted along with ECIL to manufacture EVMs once a broad consensus was reached on their introduction. An EVM is designed with two units: the control unit and the balloting unit. These units are joined together by a cable. The control unit of the EVM is kept with the presiding officer or the polling officer. The balloting unit is kept within the voting compartment for electors to cast their votes. This is done to ensure that the polling officer verifies the voter's identity before enabling the ballot. With the EVM, instead of issuing a ballot paper, the polling officer will press the Ballot Button, which enables the voter to cast their vote. A list of candidates' names and/or symbols will be available on the machine with a blue button next to it. The voter can press the button next to the candidate's name they wish to vote for. The machine runs on a 7.5-volt power pack (battery). In the case of the M3 EVM, power packs are inserted in the 5th, 9th, 13th, 17th and 21st Balloting Units if more than 4 BUs are connected to a Control Unit. On the right side of the BU, along the candidates' vote buttons, the digits 1 to 16 are embossed in Braille for the guidance of visually impaired electors. Political fate of the machine EVMs were first used in the general election in Kerala in May 1982; however, the absence of a specific law prescribing their use led to the Supreme Court striking down that election. Subsequently, in 1989, the Parliament amended the Representation of the People Act, 1951 to create a provision for the use of EVMs in the elections (chapter 3). A general consensus on their introduction could be reached only in 1998, when they were used in 25 Legislative Assembly constituencies spread across the three states of Madhya Pradesh, Rajasthan and Delhi. Their use was further expanded in 1999 to 45 Parliamentary Constituencies and later, in February 2000, to 45 Assembly Constituencies of the Haryana Assembly elections. In the State Assembly elections held in May 2001 in the states of Tamil Nadu, Kerala, Puducherry and West Bengal, EVMs were used in all the Assembly Constituencies. Since then, for every State Assembly election, the Commission has used EVMs. In the 2004 General Election to the Lok Sabha, more than one million EVMs were used in all 543 Parliamentary Constituencies in the country.
In a meeting of all political parties held on 4th October, 2010, the parties expressed satisfaction with the EVM, but some parties requested the Commission to consider introducing a Voter Verifiable Paper Audit Trail (VVPAT) for further transparency and verifiability in the poll process. The Government of India notified the amended Conduct of Elections Rules, 1961 on 14th August, 2013, enabling the Commission to use VVPAT with EVMs. The Commission used VVPAT with EVMs for the first time in the bye-election from the 51-Noksen (ST) Assembly Constituency of Nagaland. Thereafter, VVPATs have been used in selected constituencies in every election to Legislative Assemblies and in 8 Parliamentary Constituencies in the General Election to the House of the People, 2014. Critical analysis and credibility of EVMs The efficiency and quick turnaround time observed with EVMs become particularly crucial when they are used for larger populations. The scale of the recently held general election in India bears testimony to how EVM technology addresses electoral fraud and simplifies the electoral procedure. The election witnessed a historic 67 percent voter turnout from nearly 900 million registered voters across 542 parliamentary constituencies. For a democracy of this size with a complex multi-party system, electoral fraud is naturally a leading concern. But the use of EVMs in India's electoral procedure over the years has given its voters confidence that their vote makes a meaningful difference to election results and democratic governance. Indian voting machines must be designed to function under more challenging environmental conditions and operational constraints than other electronic voting systems. These requirements have influenced the simple design of the current machines and impact any security analysis of them. Among the challenges are: Cost: With well over a million EVMs in use, the cost of the system is a major concern. The current EVMs are built from inexpensive commodity parts and cost approximately $200 for each set of units, far less than many DREs used in the U.S., which cost several thousand dollars. Power: Many polling places are located in areas that lack electricity service or have only intermittent service. Thus, the EVMs operate entirely from battery power, rather than merely using a battery as a backup. Natural Hazards: India's varied climate has great extremes of temperature, as well as other environmental hazards such as dust and pollution. EVMs must be operated under these adverse conditions and must be stored for long periods in facilities that lack climate control. An Election Commission report cites further dangers from "attack by vermin, rats, fungus or due to mechanical danger, [that might cause] malfunction". Illiteracy: Though many Indian voters are well educated, many others are illiterate. The country's literacy rate in 2007 was 66%, and only about 55% among women, so handling illiterate voters must be the rule rather than the exception. Thus, ballots feature graphical party symbols as well as candidate names, and the machines are designed to be used without written instructions. Unfamiliarity with Technology: Some voters in India have very little experience with technology and may be intimidated by electronic voting. For example, "Fifty-year-old Hasulal Topno [… an] impoverished Oraon tribal, who gathers firewood from the forest outlying the Palamau Tiger Reserve, a Maoist hotbed 35 km from Daltonganj town" told a reporter, "I am scared of the voting machine," prior to its introduction in his village.
Nirmal Ho, “a tribal and a marginal farmhand in the Chatarpur block of Palamau district,” said he was “more scared of the EVMs than the Maoists” on account of his unfamiliarity with technology. To avoid further intimidating voters like these, India’s EVMs require the voter to press only a single button. Booth Capture: A serious threat against paper voting before the introduction of EVMs was booth capture, a less than-subtle type of electoral fraud found primarily in India, wherein party loyalists would take over a polling station by force and stuff the ballot box. Better policing makes such attacks less of a threat today, but the EVMs have also been designed to discourage them by limiting the rate of vote casting to five per minute. Any voting system proposed for use in India must be able to function under these constraints. In recent years there have been numerous allegations and press reports of election irregularities involving Indian EVMs. It is difficult to assess the credibility of these charges, since there has apparently never been a prosecution related to EVM fraud, and there has never been a post-election audit to attempt to understand the causes. Nevertheless, they paint a troubling picture of election security in India. Reports of malfunctions have been extensively surveyed by Rao. For instance, he relates that in the 2009 parliamentary election there were reported EVM malfunctions in more than 15 parliamentary constituencies across the country. Especially troubling are claims that when the voter pressed a button for one candidate, a light would flash for another, which could be explained by a simple attack on the EVM cable. Rao also relates reports from prominent politicians that engineers approached them in 2009 offering to fix elections through this method. Despite these incidents, experts for the Election Commission have equated any questioning of the security of the EVMs with an attack on the commission’s own impartiality and integrity. In a television interview, P.V. Indiresan, who chaired the Election Commission’s 2006 technical review, went as far as to liken doubting the security of the EVMs to “asking Sita to prove her virginity [sic] by having Agni pariksha [trial by fire]” (a reference to a famous episode in the Ramayana).
https://medium.com/politics-and-beyond/how-india-revolutionized-its-voting-system-through-electronic-voting-machines-a5edb0a7a6f9
['Shreyanshi Dubey']
2020-12-22 04:51:16.793000+00:00
['Politics And Elections', 'Elections', 'Technology', 'Technical Analysis', 'Voting']
632
Playmaker: The Reality of 10x Engineer
Playmaker: The Reality of 10x Engineer You become one by making 10 teammates 2x "We will work as a group, we need to be a real team, that is the only way to go far" (Francesco Totti) Sometimes I feel sorry for our recruitment team. They spent years looking for rockstar developers, only to realize that the developers they should have been searching for are actually ninjas. When they finally found where ninja developers live, and learned to speak the ninja language, the target moved once again. Right now, the thing they need to hunt is called a 10x engineer. But what exactly is a 10x engineer? How do you identify one? Looking from the other direction, as a software engineer, if so many employers are looking for 10x engineers and some are willing to pay them 5x or even more, it's only natural that I would do my best to become one. But what exactly does it take? Is there a set of skills or some kind of "10x criteria"? Define x: If you ask the headhunters what a 10x engineer is, they will probably say something like "Productivity! A 10x engineer can complete a task 10 times faster". This sounds great, but before paying the 5x salary, I just have 2 quick questions: When we say "10 times faster", what is our baseline and to whom are we comparing? Is it "10 times faster" for any task our team has, or just for specific tasks? Now, don't get me wrong. I do believe that there are software engineers that can make a huge impact on our product, on our team, and eventually on our business. In some cases, a single uniquely talented engineer can be the difference between winning and losing. I have seen this happening and I will share two examples in this article. I can also share that as an engineering manager leading a fairly large team (currently at WalkMe we are over 200 engineers), I am investing a significant portion of my time in finding uniquely talented engineers and helping them make their best career decision, which is to join our team. But I am not looking for engineers that can complete any task 10 times faster. I am looking for a different kind of magic. "My mama says they were magic shoes. They could take me anywhere" (Forrest Gump) The Two Types of 10x Engineers: The way I see it, there are two types of 10x engineers. I will explain my theory and dive deeper into each of these two types, including a concrete example from people I worked with and teams I led. Let me start by saying that both 10x types are valuable and that in most engineering orgs you need both of them. I would also claim that there are several personality attributes and behavioral patterns that are commonly found in both types of 10x engineers. But in terms of the type of excellence they bring to the table, they are completely different from each other. Type #1: Wide & Shallow: I want to take you back to 2012. I was working for HP, leading the R&D group for a product called UCMDB. The product's main value prop was providing Ops people with a near-realtime picture of the topology of the infrastructure and applications they were operating. The data we were collecting and storing was super valuable; the only problem was that in order to actually be able to use this data the user had to be both an expert in graph theory and have an intimate knowledge of our proprietary data model. In reality, the learning curve was so steep that most of our customers had no more than 3 people who were able to query the data.
All other potential data consumers had to "submit a request for a UCMDB view" and wait for these honorable 3 admins to prepare it for them. Not a very efficient way to consume real-time data. UCMDB original query builder After a few failed attempts to simplify the query language, plus a series of cosmetic UI changes that miserably failed to bring more users into a very advanced application, we decided to take a different direction: Instead of our old and heavyweight application (a huge Java applet) that could only run on strong desktops, we wanted a modern and lightweight web application that can run on any device, including tablets and smartphones. Instead of a super complex query creation experience that was based on pattern matching on directed graphs, we wanted a simple query creation experience that is based on natural language search. Instead of the old-fashioned query results consumption interface with multiple tabs and huge results tables, we wanted a modern interface that presents only the important data points and then helps the user to navigate and extract more information. Instead of our slow-paced release cycle (a version every 6 months), we wanted to be able to continuously release new features and have a short feedback loop. Introducing: UCMDB Browser (AKA "Mitzi") But how do you do that? We were aiming to introduce to the market something that isn't just incremental, but rather a game-changer capability that will make our product relevant to new use cases and new personas. Clearly, for such an initiative to be successful, we couldn't use the tech stack, architecture, and dev methodologies that we were using in other areas of our product. We needed something completely different. This is where a "wide & shallow" 10x engineer shines. Before assembling a full feature team, we assigned him the task of building the skeleton of the new app. Now I know it sounds a bit strange. How can one person be an expert in so many domains? But in reality, after 4 weeks or so, we had a nice app skeleton, with a search engine that could translate natural language search terms into the existing topological queries, with a simple backend that exposed a RESTful API, and a basic web frontend (implemented with GWT, which was a thing back then). All wrapped with a good CI/CD pipeline and a testing framework (unit, component, E2E) that was based on ideas and tools that were suitable for satisfying our continuous delivery requirement. This app skeleton created by the 10x engineer wasn't production-ready. It wasn't supposed to be. But it gave us a great starting point for the entire feature team to start from. The 10x engineer who created the skeleton doesn't just throw it over the fence to the team. Instead, he becomes part of the team. At this point, his mission is to ramp up the different team members on the tech stack and architecture he chose, in a way that will allow them to become the experts and the owners of the product.
This is usually done best with 1:1 sessions, tailored to the domain expertise and skill set of each team member. In the UCMDB case, this worked pretty well. The team ramped up very quickly. They embraced many of the ideas and directions that were used in the app skeleton while also changing a few. For some of the team members, this was an opportunity to learn new technologies and, more importantly, to be coached by a very experienced and talented engineer. After 3 months of focused work by a team of 8 engineers, we released a new product called UCMDB Browser to the market. It was a huge success, both from a business perspective and from an engineering perspective. Our 10x movie Type #2: Narrow & Deep: There is a common pattern that I keep on seeing in many engineering teams that are working on successful products and fast-growing businesses. The pattern is not very easy to explain, but generally what happens is that an area in the product that used to be one of your strengths, and even a core differentiator from competitors, gradually becomes very hard to support and maintain. In many cases, it's not easy to identify the pattern and admit that this is happening to you, especially for people who have been part of the team for a long time. But when you analyze how your resources are allocated, you realize that a nice chunk of your team (developers, support engineers, professional services engineers) is spending most of its time fighting to keep alive what used to be a core product strength. This common pattern also happened to us in my current team at WalkMe. One of the strengths of our platform is the ability to identify GUI elements in the hosting application (the application that WalkMe is running on) in an efficient and reliable way. We had a working solution that was built in the early days of the company (~9 years ago) and has been continuously extended and improved since then. It was (and still is) a far better solution than what other vendors (including all our competitors) have, in the sense that it was able to identify more GUI elements, faster, and with low overhead on the resource consumption (CPU, memory) of the hosting application. With success comes scale. More customers, more data, more use cases. In the domain of GUI element identification, the factors that added complexity were the requirement to support new UI technologies (Single Page Apps, Shadow DOM, etc.) and the requirement to support highly personalized applications in which the GUI that is presented to each user might be different based on her role, geo-location, or simply her user preferences. Our "Find Element" module was flexible enough to satisfy these requirements (usually by adding plugins for specific UI technologies and platforms), but the amount of effort that was required to keep this machine running was gradually (but constantly) increasing. "Perhaps time's definition of coal is the diamond" (Khalil Gibran) Identifying the pattern is nice, but doing something about it is the real deal. We understood that we needed to build the next generation of our GUI element identification solution. The primary goal was to improve the accuracy and robustness of element identification. On top of that, we wanted a solution that can evolve beyond just identifying single UI elements, toward understanding context and discovering user flows in the application.
Much like in the “wide and shallow” example, it was clear that this isn’t an incremental addition to the existing product, but rather a complete replacement of one of the core engines in our product. But how do you replace the engine while the car is driving full speed? This is where a 10x engineer can make a huge difference. Actually, in our case, it was 2 10x engineers. It all started with a strategic acquisition of a small company called DeepUI. The acquired company was not focused on building the exact thing WalkMe needed, but the 2 engineers that founded DeepUI were top tier experts in using machine learning for GUI understanding. The next phase is research. You want to analyze the existing solution, and understand how it performs with the different use cases. This serves as a “minimal requirement” definition for the new solution, and it may take a few months depending on the complexity of the system. Only then you can design the new solution and build an initial prototype. As with all “replacing the engine” projects, there is a huge value in being able to run the new solution alongside the existing solution, in production, and on real data. This is the best way to validate the strength and weaknesses of the new solution. Clearly, this isn’t easy to build, but that exactly why you need 10x engineers. Then comes the really tricky part. You have successfully validated the new solution, and you now want to productize it and start migrating your customers from the old solution to the new one. For this, you must build a team. Suddenly, the 10x engineers need to switch from being researchers and developers, into being leaders. And not just ordinary leaders, but rather super leaders. Among all types of projects that an engineering org may be involved in, this is the most complicated and risky type. Replacing the core engine of the product? checked. Introducing a new paradigm (ML)? checked. Impacting the daily work of multiple teams in the company(R&D, Support, Services)? checked. How do you make this switch and build such a team? I don’t think there is one recipe and it’s truly very rare to find people that are capable of doing it. One thing that worked well was offering people that were working in the teams responsible for the legacy solution, to move to the new team that was building the new solution. It worked well because these people were domain experts, and they understood the huge value of the new solution. They also knew best what it takes to migrate from the old solution to the new one. Finally, for them personally, it was an opportunity to work with very strong engineers and learn new technologies. At the end of the day, it’s all about people and leadership. The engineers who started the project really became 10x engineers only when they managed to create a strong team that can make sure the new solution works in production at scale. In our case, it worked very well, and we now have a team of ~30 engineers working on DeepUI. The solution is in production, and we are already working on leveraging its potential for additional use cases. “Before we work on artificial intelligence why don’t we do something about natural stupidity?” (Andy Kelly on Unsplash) 10x Impact on People: Although the two types of 10x engineers are very different in terms of the things they do exceptionally well, there is one thing they have in common: 10x engineers of both types have a huge impact on other people in the team. Here is how they are doing it: Role Model: other engineers want to be like them. 
It starts with the compensation package they have and goes further to the impact they have on technological decisions being made. Their existence is proof that there is a technical career path, which means that as a good engineer you don’t have to become a manager in order to be promoted. Standards: since other engineers want to be like them, they will try to replicate their professionalism and adhere to the technical standards they set. You will find that people are actually following their lead, reading their code, using the same design patterns they used, write tests in the same way, document and present their work like them, etc. Mentoring: it takes one to know one. 10x engineers are very good at detecting junior engineers with the potential to grow and become experts. With a small push, each 10x engineer can be a great mentor for at least one talented junior developer. “Winning one league title at Roma, to me, is worth winning 10 at Juventus or Real Madrid” (Francesco Totti) Conclusion: 10x engineers do exist in reality. As a manager, finding engineers that can have a 10x effect on your team requires time and focus, but it is well worth the investment. A super talented engineer can make the entire team 10 times better, similarly to a super talented playmaker taking an average plus football team all the way to the league championship. As an engineer, it is possible to become 10x. You can either take the “wide and shallow” path or the “narrow and deep” path. In both cases, this is going to be a long journey, so pick a domain that you really love. Last but not least, always remember that it’s not only about how much knowledge you have or how great your execution skills as a developer are. Being 10x is all about your ability to lead other engineers and make them better.
https://medium.com/swlh/playmaker-the-reality-of-10x-engineer-8af96abce74
['Ofer Karp']
2020-11-24 16:30:23.352000+00:00
['Software Development', 'Technology', 'Software Engineering', 'Leadership', 'Programming']
633
5 technological trends in underwriting and claims settlement that have changed the insurance industry
https://medium.com/forinsurer/5-%D1%82%D0%B5%D1%85%D0%BD%D0%BE%D0%BB%D0%BE%D0%B3%D0%B8%D1%87%D0%B5%D1%81%D0%BA%D0%B8%D1%85-%D1%82%D1%80%D0%B5%D0%BD%D0%B4%D0%BE%D0%B2-%D0%B2-%D0%B0%D0%BD%D0%B4%D0%B5%D1%80%D1%80%D0%B0%D0%B9%D1%82%D0%B8%D0%BD%D0%B3%D0%B5-%D0%B8-%D0%B2%D0%BE%D0%B7%D0%BC%D0%B5%D1%89%D0%B5%D0%BD%D0%B8%D0%B8-%D1%83%D0%B1%D1%8B%D1%82%D0%BA%D0%BE%D0%B2-%D0%B8%D0%B7%D0%BC%D0%B5%D0%BD%D0%B8%D0%B2%D1%88%D0%B8%D1%85-%D1%81%D1%82%D1%80%D0%B0%D1%85%D0%BE%D0%B2%D1%83%D1%8E-%D0%BE%D1%82%D1%80%D0%B0%D1%81%D0%BB%D1%8C-abf1e7dcb7da
['Oleg Parashchak']
2020-12-27 21:01:24.721000+00:00
['Blockchain', 'Insurtech', 'Artificial Intelligence', 'Technology', 'IoT']
634
QuarkWorks: Meet the Team Series — Evan Teters
Get to know Evan who joined our team in May and some of the exciting projects he has been working on. Name: Evan Teters Role: Mobile Developer (Native iOS and Android) When did you join QuarkWorks and what led you to take on this position? I started in late May of 2020. While interviewing, I got the feeling that QuarkWorks was a great place to quickly advance my mobile development skills on small teams with exciting projects (this feeling turned out to be correct). When interviewing, I found the team to be very friendly and very eager to learn new things, both qualities that I value extremely highly. What excites and energizes you about the company? You could see the above answer for some hints! At QuarkWorks, we are always trying to improve how we are writing software and use the latest technologies to do so. This keeps me learning while I work, and makes every project and feature we implement interesting and exciting. Also exciting is our work towards developing our own product, but I’m not sure I can say much about that yet :) What has been your favorite project to work on or are currently working on? It is tough to decide on my favorite project. I’ve worked on 2 projects in my 4–5 months at the company, and they were both exciting in their own ways. DealTeam is a brand new app, and I really enjoyed being a part of laying the framework for great things to come and building out the core features that would define the app. Handshake is an older app, but it was really exciting to work around video streaming (for virtual career fairs) because video streaming is something that everyone at QuarkWorks is passionate about. Advice you would give someone working at a startup? I have found the most valuable thing that can be done while working on any tech project (no matter the size!) is to keep asking questions. Ask questions of your users, your teammates, and your peers in other companies. It is always said that you should keep learning, but the right way to figure out what you don’t know and what you should focus on is to ask! Your users will tell you what they need, and your peers will help you figure out the best solutions. What do you enjoy doing outside of work? Wow that is a big question! For one, I am currently working on my Masters in computer science, focusing on computer vision in video streaming applications. That takes up a lot of my outside-of-work time! But there’s plenty more I manage to squeeze in from time to time. I like to listen to podcasts, and I always have my ear to the ground for new things to listen to and learn about. Podcasts lend themselves nicely to another hobby: taking long walks through Columbia’s sprawling trail network. To workout my creativity, I play Dungeons and Dragons with some old friends (virtually, nowadays) and if time permits I run a game or two. I also spend some time practicing and competing in tournaments for the game Super Smash Brother Ultimate, which is a great way to spend time with my brother and tap into a long-dormant competitive side of myself. If you could be an animal, what would it be? A sloth. Not because they reflect my personality. Rather, I hold their absolute chill on an unreasonably high pedestal. You can tell they have no worries, and I envy them that. As ever, QuarkWorks is available to help with any software application project — web, mobile, and more! If you are interested in our services you can check out our website. We would love to answer any questions you have! 
Just reach out to us on our Twitter, Facebook, LinkedIn, or Instagram.
https://medium.com/quark-works/quarkworks-meet-the-team-series-evan-teters-fc220099fccc
['Hannah Pratte']
2020-10-29 15:55:51.369000+00:00
['Mobile', 'Technology', 'Apps', 'iOS', 'Startup']
635
Start Using Pandas From the Command Line
If you work in the data analysis world, chances are you do a lot of data wrangling. If you use pandas in your data workflow, you've probably noticed that you often write the same bits of code. Although complex datasets or exploratory analysis may require going to Jupyter notebooks, for datasets that only need simple processing, going through the process of setting up an environment and creating a new notebook can be a little overwhelming. So you probably end up opening the file in a spreadsheet. However, while spreadsheets are convenient, they are difficult to automate and do not offer as many features as pandas. How can you take advantage of the features of pandas while keeping the flexibility of spreadsheets? By wrapping pandas functions in a command-line interface with chainable commands. A command-line interface, or CLI, allows us to quickly open a terminal and start typing in commands to execute some tasks. Chainable commands mean the result of one command is passed to another, which is particularly interesting when processing data. In this article, we will use Click to build a CLI. Click is a Python package to quickly build CLIs without having to parse the command-line arguments with the native Python libraries. We will first install a template Click project, taken from the Click documentation, that allows chaining commands. Then I will walk you through writing commands to read, filter, display, and write files using pandas under the hood. In the end, you will be able to write your own commands to fit your needs.
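To give a flavor of where this is going, here is a minimal sketch of such a chainable CLI built with Click and pandas. This is my own illustration rather than the template project from the Click documentation, and the file name and command names are hypothetical:

# pdcli.py - a chainable pandas CLI (illustrative sketch)
import click
import pandas as pd

@click.group(chain=True)
def cli():
    """Chainable pandas commands: read, query, head, write."""

@cli.result_callback()  # on Click 7.x this decorator is @cli.resultcallback()
def run_pipeline(processors):
    # Each subcommand returns a processor function; run them in order,
    # passing the DataFrame from one step to the next.
    df = None
    for processor in processors:
        df = processor(df)

@cli.command("read")
@click.argument("path")
def read_cmd(path):
    def processor(df):
        return pd.read_csv(path)      # load a CSV into a DataFrame
    return processor

@cli.command("query")
@click.argument("expr")
def query_cmd(expr):
    def processor(df):
        return df.query(expr)         # filter rows with a pandas query expression
    return processor

@cli.command("head")
@click.option("-n", default=5, help="Number of rows to display.")
def head_cmd(n):
    def processor(df):
        click.echo(df.head(n))        # display the first n rows
        return df
    return processor

@cli.command("write")
@click.argument("path")
def write_cmd(path):
    def processor(df):
        df.to_csv(path, index=False)  # write the result back to disk
        return df
    return processor

if __name__ == "__main__":
    cli()

A hypothetical invocation chains the commands in one line, for example: python pdcli.py read data.csv query "age > 30" head -n 10 write filtered.csv, which reads a file, filters it, previews the first rows, and saves the result without ever opening a notebook or a spreadsheet.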
https://medium.com/swlh/start-using-pandas-from-the-command-line-5dcae6b2ccca
[]
2020-12-24 14:28:03.068000+00:00
['Programming', 'Software Engineering', 'Data Science', 'Technology', 'Productivity']
636
Cyber Monday poised to mark record for US online retail sales
At Amazon’s JFK8 distribution centre in Staten Island, New York, United States, Nov 25, 2020 Cyber Monday was set to become the biggest online shopping day ever for the United States, garnering up to $11.4 billion as the coronavirus pandemic prompts consumers to stay at home and turn to the internet for their holiday shopping needs.
The robust performance comes despite nearly two months of offers since Amazon.com Inc held its Prime Day sales event in October, with retailers seeking to recoup business lost during this year’s COVID-19-driven closures of malls and stores. Estimates from Adobe Analytics showed this year’s conclusion to Thanksgiving weekend promotions would come in between $10.8 billion and $11.4 billion. While that was down from an earlier estimate of as much as $12.7 billion, it still easily surpasses this year’s Black Friday figure of $9 billion, which was the strongest Black Friday online sales result to date, as well as last year’s Cyber Monday total of $9.4 billion. Consumers are likely to keep up that spending too, said Taylor Schreiner, director of Adobe Digital Insights. “While COVID-19, the elections and uncertainty around stimulus packages impacted consumer shopping behaviors and made this an unprecedented year in e-commerce, we expect to see continued, record-breaking e-commerce sales from now until Christmas,” Schreiner said in a statement.
Adobe said top-selling items included hoverboards, televisions from LG Electronics and Samsung Electronics, Apple Inc’s AirPods and watches and the Nintendo Switch. Amazon, one of the biggest beneficiaries of the pandemic-induced shift away from physical stores, did not appear to have any major technical glitches during the day. Bill Hon, 49, a cook in Crawfordsville, Indiana, said Amazon was still drawing his business despite offers from other firms. “I go online a little bit and look around and do some comparison shopping, but Amazon pretty much beats everything,” he said. Amazon alone has needed to add hundreds of thousands of staff to its rosters to meet demand for home delivery during the pandemic. Its third-quarter sales jumped 37 per cent to $96.1 billion.
https://medium.com/@kraff/cyber-monday-poised-to-mark-record-for-us-online-retail-sales-cb9f6454b399
[]
2020-12-17 18:48:34.414000+00:00
['USA', 'Amazon', 'Technology', 'Online Shopping']
637
Elon Musk, Man on Fire!
Fading Utopia Has Elon Musk gone too far with his other-worldly tech utopia of Mars space flight and driverless cars? Some think he has. Whatever your opinion, one thing that we can't take away from the man is that he's got balls. He strives to do what many of us only dream about. Okay, I get it. He's got a few billion in the bank too, which doesn't hinder his cause of world domination of the stars and artificial intelligence (AI), but cash isn't everything when you're under fire for recent tweets drawing attention to people having sex in a Tesla car while it's on autopilot, not to mention the company's shares dropped by 27 percent in January of this year. Time to worry for the South-African-born American, perhaps? Not likely. Musk is a rare breed of man who doesn't know when to quit. That's what's made him one of the richest and most innovative men on the planet. He's an Edison of our age, boldly going where most of us would never end up, and would never try to get to, anyway. 'There have to be reasons that you get up in the morning and you want to live. Why do you want to live? What's the point? What inspires you? What do you love about the future? If the future does not include being out there among the stars and being a multi-planet species, I find that incredibly depressing' — Elon Musk He's a non-pessimist when it comes to technology, and he's just the kind of man we need leading us in the early decades of the 21st century and beyond. He believes, like only two dozen outliers with maniac brains on earth, that tomorrow can be here today. OpenAI Still, there's a caveat to all this positive uproar, one that comes from the man himself. As much as he believes in AI, with its promotion through his company Neuralink and all the positive things it can do to enrich our lives in the future, he is well aware of the dangers we could face if we don't harness its powers correctly. He's been quoted as calling it 'the biggest existential threat' and 'summoning a demon'. That is partly why he founded and also funds OpenAI, a non-profit artificial intelligence research organization whose motto is: Discovering and enacting the path to safe artificial general intelligence. Musk's fears, shared by many computer scientists and AI specialists, were confirmed in a recent interview when he said: 'As AI gets probably much smarter than humans, the relative intelligence ratio is probably similar to that between a person and a cat, maybe bigger.' This vision of a 'singularity' that would leave us as the cat in the intelligence race sounds too much like science fiction, because as of yet AI still struggles with even the most basic problem-solving tasks: Tesla's in hot water again with yet another accident caused by one of its driverless cars. Not good for its reputation or for sales. Skeptics The skeptics of this technology, those who believe driving AI cannot anticipate such things as the way a biker moves on the road or distinguish graffiti from signage on a road sign, foresee a time of basic carnage and one accident after another. These critics stress it will be nigh on impossible to teach an AI system these intangibles and that the Singularity of computer AI taking over human intelligence is a non-starter or, at the very least, a very long distance down the road. 'I do not fear computers. I fear lack of them.' — Isaac Asimov Musk already realizes the dangers of the Singularity becoming a reality, and is working towards making AI as safe as possible.
However, should we take these warnings seriously? And if so, what can we do to assist Musk et al in the process? There are innumerable computer scientists, academics and other people interested in the field who advocate that more extensive research in AI needs to be carried out before the technology becomes so enmeshed with our lives that it would be difficult to live without it. Obviously, OpenAI is evidence of a start — but is it enough? White House Goonies Furthermore, Musk is asking the US government to take some time to think about the implications of a mass rollout of AI technology, gain insight into it first, then act. Photo by John Cameron on Unsplash However, taking somewhat of a pessimistic stance, he doesn't see Trump and the White House Goonies doing it. He is also worried about a likely monopoly on AI by Facebook and Google's DeepMind, and that is why he started OpenAI. 'Our missile technology, our equipment, is better than anybody by a factor of five. I mean look, we have, in terms of technology, nobody can even come close to competing. Now we're going to start getting it, because the military has been cut back and depleted so badly by the past administration and by the war in Iraq, which was another disaster.' — President Trump Musk, unlike other entrepreneurs and professionals in the AI sphere, sees the potential technology as being very dangerous if it isn't limited and controlled by the powers that be. In a recent interview, he reiterated that there should be a government committee to oversee developments in AI and establish rules in partnership with industry professionals, which, in turn, will promote safety in the development of AI. Musk's seemingly pessimistic world-view of AI's capacity to be bad isn't that at all, at least at the outset. His negativity emanates from his optimism about AI's potential to be the changing force for good in our world. This dichotomy is what makes him such a fascinating character and probably why he has been so successful. AI notwithstanding, Tesla's dramatic drop in share price is another worry, not to mention it has been the worst performer on the Nasdaq 100 this year, with a 42% loss. Nice Guy Musk has proven his critics wrong time after time. And what attracts many people to him is his ability to admit when he's wrong. This show of his human side — unlike other thought leaders and billionaire company moguls I could mention — makes him likable and trustworthy. Whatever direction the South African goes with AI, his Tesla cars, the space race or any other business idea he has in mind, there is one thing we can't take away from the man — he will do things on his terms, whether you, I or any other person on the planet likes it or not. Now even if he's a man on fire, under pressure from all sides, I think he'll come out on the other side with the odd scratch, maybe bruised and battered, too, yet ready and willing to fight for what he believes in. Well, I think that is surely a life worth living.
https://medium.com/hackernoon/elon-musk-man-on-fire-22f194fadf8b
['James Dargan']
2019-06-13 12:31:00.915000+00:00
['Artificial Intelligence', 'Elon Musk', 'Technology', 'Tesla', 'Business']
638
Scary Stories Told in the Dark recommends NordVPN to stay secure online
Spooky season is finally here and I bet you want to be scared or to scare someone with some creepy stories. Well, I have what will scare you. Scary Stories Told in the Dark is the perfect channel if you want to be scared. With more than 300,000 subscribers, they have conquered Youtube. You will be able to listen to some very scary stories that happened for real. Just by writing these lines, I get goosebumps. Maybe you noticed NordVPN mentioned in their latest episode. NordVPN is a cybersecurity app that will blow your mind. They give a coupon code to stay secure online at the best cost. To get it and miss nothing, I explain everything below. Youtube banner of Scary Stories Told in the Dark How to get the discount from Scary Stories Told in the Dark? Listening to Scary Stories Told in the Dark really scared me, and it is better not to end up like the people in those stories with your data stolen, which is why they recommend NordVPN with a coupon code offering the possibility to save 68% on the 2-year plan for only $3.71/month. To get it, nothing is easier than clicking on the link below: Get NordVPN for $3.71/month only — click here. NordVPN for only $3.71/month with the coupon code: scary story What to do with NordVPN? NordVPN used to be only a VPN. Nowadays, things have changed and the company has taken it to another level: it became a cybersecurity tool that will ease your life on a daily basis. You will be able to secure your data thanks to double encryption and to block malware. Basically, the application works similarly to an ad-blocker to offer the best experience to its users. NordVPN appears to be one of the leaders in its class on several points. I know you were waiting for it, so here is a quick overview of the main features: 5,500 servers in 59 different countries; ability to connect up to 6 devices with a single account; compatible with macOS, iOS, Linux, Windows & Android; customer service available 24/7; $3.71/month for 2 years thanks to the coupon code. NordVPN: is the service really popular? In terms of popularity, NordVPN is doing very well. It is never easy to decide whether a product is good enough, and it is even more difficult when it comes to an online product that we can't touch or see. However, many influencers are recommending NordVPN for the quality of its services. I will share with you two of my favorites. The first one is Internet Historian: Incognito Mode, simply because he opened my eyes, and thousands of others', to the broad culture of the Internet. The second one is English with Lucy. To be honest, I have always struggled when I needed to learn English. Sometimes I looked for help on Youtube, and this is how I found her channel. Since I subscribed, my English has improved so much! Definitely a channel with some high-quality content. These two high-quality channels recommending NordVPN, a high-quality cybersecurity service: let me tell you, that's not a coincidence! Don't wait and get NordVPN now to stay secure online.
https://medium.com/@confyword/scary-stories-told-in-the-dark-recommends-nordvpn-to-stay-secure-online-32f60290c843
['Dunkan Scott']
2020-10-24 16:40:03.304000+00:00
['Coupon', 'Cybersecurity', 'Technology', 'Spooky', 'VPN']
639
Thousandth Line Of Code
Not everything is about code Comic by XKCD Coding something larger than "you" — as in something you can write and maintain yourself, for yourself — is about anything but the code. It should be written in a way that is and will be understood by the team, arguably to the point that a non-technical person can read it just as well as you do. Because the problem is, as noted time and time again, that you read code more often than you write it. As a matter of fact, the "ratio of time spent reading versus writing is well over 10 to 1." *. So hand in hand with the previous point, if you have to sacrifice some performance to align code with domain knowledge, to make it readable and malleable, go for it! Because code can always be optimized when needed; and the input that you can get from aligning the code with a domain expert's knowledge is invaluable. And the time not wasted by your peers on digging through piles of cleverness is time and money saved. In the next couple of paragraphs, I'll illustrate certain viewpoints that I'd like you to go through. Big picture, small picture We tend to work on a smaller scale — us, developers. Connect with this, send an event here, consume that. We sadly avoid immersing ourselves in the domain and the problems that are actually relevant to what we produce. We do not try to understand the product's value; we do not walk a mile in the Product Owner's shoes. We write solutions to technical problems ignoring their real value. But programming is a balancing act, and when we set our mind to do the work, it's easy to forget that we often focus on the detail where the detail is not as important. Consider this a matter of trust. Typically, we work in a structured environment. The Product Owner * will tell us what he or she expects. We need to trust that what we are being told has to be done will make the most of our time. And sometimes we find out, for example, that the current architecture is unfit for the solution. Don't be afraid to settle differences by talking. You don't have to understand the big picture, but you have to trust that someone does — and if you present your concerns, the things that you see as problematic — for example, the architecture — the Product Owner will prioritize technical work as well, because it is his job to decide on the priority of a task, but it is up to you to help him understand how valuable (or costly!) such a change is. If you do not, then you risk that the PO will make the decisions without your input. Solving the problem that never was From the perspective of a developer, it is certainly tempting to go for low-hanging fruit. We see that we can optimize if we just violate this separation a bit. Tick, one day added to our work. We can make it efficient by doing something else. Tick, another day. And from the estimated half of a sprint, you are suddenly working on a problem for the second month. And the sad truth is, your work is most likely unnecessary. You can see the potential for a problem, but until the problem has materialized and its impact is measurable, it is a non-issue. I'll hammer my point again. If the work you do has no real-world impact, then it's a waste. So stop worrying about technical issues that much. It's a pendulum There is another side of things still. While I prefer to see the business side of things as the more important thing to focus on, and technical issues as less important, there is a stark difference between hyper-focusing on technical issues and not taking care of your work at all.
I'd argue that you have to spend more time thinking about design and architecture than about code. Jon Eaves* said: "A microservice can be rewritten in 2 weeks or less", and so I urge you to apply a similar mindset to your code in any framework or language: design your solution so it can be rewritten swiftly and without a hassle. Focus on the seams, not on the internals, and you'll do just fine. How problem solving becomes a problem in itself: An example Any fool can write code that a computer can understand. Good programmers write code that humans can understand. — Martin Fowler, Refactoring: Improving the Design of Existing Code There was an application that was created to facilitate batch calculation per user, with different calculation Definitions that could be assigned to each user. It was designed as a performant application — before any code had been written, the database model was created and the algorithm steps were defined; in essence, a typical waterfall project. All designed around the idea that the definition was central, and so calculations were performed by loading a definition, then the users assigned to said definition. This approach was valid until the first actual usage. Amongst a plethora of other problems, the one that bit us the most was that there was no possibility to recalculate a single user after an error, as the system was designed to allow for recalculation of a whole definition, not of particular users. The solution was orthogonal to how the business operated. There was never a complaint about a definition, but always about a customer. Moreover, the very first use-case broke the main promise. The solution was neither performant, as definitions could be customized per user, massively increasing the volume of definitions, nor easy to code within. And the worst part? There were around ~200k users and around 5,000 Definitions, with multiple Definitions assigned to a single user — we couldn't cache customers, and caching definitions provided no real benefit. Worse still, developers were so afraid to change the code according to new knowledge that we tried to cram everything into the existing solution! Had the system been designed around the business itself, we could have scaled per user, up to a theoretical limit of an instance per user. We were stuck because a single definition calculation required us to place locks on a user. It cost us a few hundred man-days to correct this, all the while bringing basic business cases as first-party features to the system. Had it been done from the start, we could have had a scalable system with an API that would allow business operations to be seamlessly implemented. Instead of that, we had a buggy solution that had to be worked around to do anything. Of course, it was an application built under my care. I find no consolation in the fact that I couldn't change a thing about the design that was handed down to me at first. The problem was very real and very painful. Because someone believed in optimization before actual demand. Because someone wanted to be smart. Details are not goals — Illustration of problem Comic by Jorge Cham Imagine yourself sitting before your keyboard, tackling yet another task. What is the very first thing that you write? Start with the contract. Write the tests for the contract. Implement the contract. This is the real value of your work. Make sure that the code you write reflects what you set out to do. Take inspiration from how the business does the stuff you are writing down and reflect their experience. How to actually store the data? How to handle API calls?
All of this is an implementation detail of a generic problem, so do it as cheaply as you think is okay. Clever, efficient code can be done later and absolutely should be done later, because you do not know how much value your optimizations will bring. When it comes to a limited amount of time, attention should always be on the business side, delivered with a minimum amount of waste. Dependencies are the least important part of the equation. Does it really matter if it's called via REST or SOAP? Or that it's stored as an event stream, in a relational or in a document database? Or maybe in a flat file, in the first version. This problem is neatly solved via OOP patterns. Create a Repository interface for your persistence. The implementation can be done cheaply and refactored without any risk, because it was kept where it should be. As a detail. Of course, you can always shave 20% from the application load by compromising separation by design, but should you do it? I argue that, until proven otherwise, absolutely not. Technical excellence is not a goal in itself. 20% may sound scary, but in the long term, what is 20% if it changes overall load from 10% to 12%, compared to the scab of code that will have to grow around your improvement? Until you are absolutely sure, don't do it. You can always think about it this way — a new instance of your application will cost the business, let's say, $200 per month. Such an instance will double the performance. You can write your 20% optimization in a week. On average, a mid-level developer in the US has a salary of $34 per hour. With all the costs factored in, it translates to $50 per hour per developer (cost of office, infrastructure et cetera). You need 40 hours — $2,000 — to add 20%; all the while you are not doing other tasks. A 20% increase for $2,000, with hopes that it will actually be needed, or $200 per month for a 100% increase. Let that sink in. Do the right thing Following up on the previous example, we recoded the system to align it with business needs. The mean time of task execution went from around a week or two of work (with manual corrections in the database) to half a day via the API. While it did not address all the issues, doing the right thing from the start could've saved us months of work. But even having that in mind, "just enough" is not enough sometimes. Persistence was relegated to a supporting role, but real data showed us that we were choking the database, with performance far below expected. It was clear that in the pursuit of clean separation we had overdone it and ignored the technical details. So we went and did literally three optimizations to the database and queries — indices, ahead-of-time fetching, and filtering in the query instead of in memory. It took us four days. Performance was slightly below the original approach (mostly because the persistence was not suited to the needs, but that's a whole different story.) It reduced our maintenance cost by a factor of tens or even hundreds, while giving us an opportunity to easily scale horizontally. Do the right thing, by doing the thing right precisely when you need it and not a moment sooner. Bruised egos (…) if the only tool you have is a hammer, to treat everything as if it were a nail. — Abraham Maslow, The Psychology of Science: A Reconnaissance There will always be push-back. Whole generations of developers were taught that software is the goal unto itself. Will you have the courage to say out loud: I will gladly sacrifice readability if it lets me squeeze out one less if statement?
Will you have the courage to admit that you spread business logic between the application and the database just because you are a certified database guru? Will you proudly proclaim: as a 10x developer, I've just made the application 10 times as hard to maintain, just to flex my skills? I hope not. If you are skilled vertically, that is, in one single area of expertise, use your skill! But don't lose focus. Your skill can be a perfect hammer, but not every problem is a nail. Software at its core is soft; you can do almost anything with any language, framework or tool. That does not mean that you should. Trust your product owner. Trust your teammates. You can have a great hammer, and you can be an expert in 'hammering', but the bubbly geek girl at the other side of the room might just have a perfect saw on her tool belt. Just don't try to cut with the hammer. Please.
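To make the "Repository as a detail" idea above concrete, here is a minimal sketch in TypeScript. The names and types are hypothetical, not taken from the project described in the article: business logic depends only on the interface (the seam), so a deliberately cheap first implementation can later be swapped for a database-backed one without a rewrite.

```typescript
// Hypothetical domain type and repository port; names are illustrative only.
interface Customer {
  id: string;
  name: string;
}

interface CustomerRepository {
  findById(id: string): Promise<Customer | undefined>;
  save(customer: Customer): Promise<void>;
}

// Cheapest possible first implementation: an in-memory map.
// Good enough to ship and test the business behaviour.
class InMemoryCustomerRepository implements CustomerRepository {
  private store = new Map<string, Customer>();

  async findById(id: string): Promise<Customer | undefined> {
    return this.store.get(id);
  }

  async save(customer: Customer): Promise<void> {
    this.store.set(customer.id, customer);
  }
}

// Business logic depends only on the interface, so replacing the
// in-memory version with a SQL- or document-store implementation later
// is a local change, not a rewrite.
async function renameCustomer(
  repo: CustomerRepository,
  id: string,
  newName: string
): Promise<void> {
  const customer = await repo.findById(id);
  if (!customer) throw new Error(`Customer ${id} not found`);
  await repo.save({ ...customer, name: newName });
}
```

The point is the seam, not the particular storage: the optimization can arrive later, behind the same interface, once real load data justifies it.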
https://jaceklipiec.medium.com/thousandth-line-of-code-1c8d2a028027
['Jacek Lipiec']
2020-12-01 21:40:03.191000+00:00
['Programming', 'Software Design', 'Technology', 'Software Architecture', 'Software Development']
640
Asynchronous vs Synchronous Programming in JavaScript
Asynchronous vs Synchronous Programming in JavaScript JavaScript break Promises & keep Callbacks Introduction JavaScript gets executed by a single thread. Because of this, it is advisable to avoid long-lasting operations in the first place. But what are we going to do if callbacks are omnipresent? Whenever it comes to I/O operations, like the network or the file system, this circumstance can get quite heavy. Fortunately, there are two kinds of callbacks in JavaScript. Want to take a really deep dive into this? Havoc's Blog [1] did a pretty detailed investigation of that. The basic difference between synchronous and asynchronous callbacks is: synchronous callbacks are executed in the calling method's context, whereas asynchronous ones are not. A good example is flattening an array of arrays. When this function gets executed, the passed-in parameter will be reduced to get rid of the multiple arrays inside an array and convert it into a single flattened one. If this were asynchronous, the function could not return the result for the assignment of the variable flat. It has to be synchronous to do so, and this is a good example of the field of application of synchronous callbacks and why they are used at all. Synchronous and asynchronous Callbacks When there are reasons to use a synchronous callback, then there are also reasons for an asynchronous callback: whenever the program needs external resources and has to wait for them, for example, connections being established, files being downloaded, etc. The following example will request a status code and a status message from the famous search engine Google and print it to the console. The first message that is going to be printed out is the last line of this short script: Requesting… followed by 200 and OK. This example mirrors the non-blocking and asynchronous approach to I/O resources promoted by JavaScript. Crucial to the consistency and dependability of an API is its behavior. It has to be consistent and never changing, the way functional programming is designed: throwing in A as an input will always return B as an output, nothing else. Converting this back to an API: a function should always invoke a synchronous callback XOR (exclusive or) an asynchronous one. There should be no case where the function itself sometimes invokes a synchronous and other times an asynchronous callback. When you read Havoc's blog post about that one, you will find the line "Choose sync or async, but not both" along with a well-founded justification for it: Because sync and async callbacks have different rules, they create different bugs. It's very typical that the test suite only triggers the callback asynchronously, but then some less-common case in production runs it synchronously and breaks (Or vice versa). Requiring application developers to plan for and test both sync and async cases is just too hard, and it's simple to solve in the library: If the callback must be deferred in any situation, always defer it. — Havoc's Blog Process.nextTick versus setImmediate JavaScript offers two functions for server-side applications: process.nextTick and setImmediate. They seem to be interchangeable at first glance. You can read about them both in their documentation: nextTick [2] & setImmediate [3]. For code interpretation on the client side, only setImmediate is available. Both expect a callback as a parameter and will execute this callback later. Based on the definitions, it seems both do the same thing. Look at the following code.
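The snippet the author refers to here is not reproduced in this text (the original post embedded it externally). The following is an illustrative reconstruction only, a minimal sketch of the kind of comparison described, written in TypeScript and not the author's original code:

```typescript
// Two ways of deferring a callback in Node.js.
// At first glance they appear to do the same thing: take a callback
// and run it "later", after the current synchronous code has finished.
process.nextTick(() => {
  console.log('nextTick callback');
});

setImmediate(() => {
  console.log('setImmediate callback');
});

console.log('synchronous end of script');

// Typical output:
//   synchronous end of script
//   nextTick callback
//   setImmediate callback
```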
Both appear to do the same thing. But when you look under the hood, they are clearly different. process.nextTick will delay the execution until a later point, but before Node.js makes I/O accesses and gives control back to the event loop. Imagine you are calling process.nextTick recursively. Where will this end? It ends in delay after delay until they have accumulated and leave the event loop starving. This gives the child a name: "event loop starvation" [6]. Node.js under the Hood by dev.to, Lucas Santos (All rights reserved) setImmediate, as the name reveals, executes the callback function immediately. Excuse me, not immediately, but on the next round of the event loop. Almost immediately. If you are a true Sherlock, then you have solved the riddle here and found out that, especially when calling process.nextTick recursively and using setImmediate in the code as well, not only is the event loop left to starve, but setImmediate callbacks are also held up. This example illustrates how important it is to know exactly what is going on with the API you use. Use the official documentation [2], [3] to inform yourself about the difference between these two in detail when you plan to work with them. The last bastion: Asynchronicity In most cases, the conversion of a synchronous function into an asynchronous one is possible by using the process.nextTick method. The following example shows a mixed method, working asynchronously and synchronously. Changing the synchronous part to an asynchronous one will bring uniformity and consistency to this function, following the principle of getting B as an output every time you put A into the function. Would you like much more detailed information? View the blog post from Isaac Z. Schlueter [5]. Conclusion We should build programming upon stability and consistency. Just because there are variables in the code, you don't have to make the code's behavior itself a variable. A function call has to be reliable and not case-dependent. Make your callbacks either purely synchronous or purely asynchronous to avoid inconsistency, and build your API upon reliability. In case of doubt, convert your synchronous call into a synthetic asynchronous one. Good for us that Node.js offers both of them: process.nextTick and setImmediate.
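The "mixed method" example mentioned above is likewise not included in this text, so here is a hedged sketch of the general idea (the function and file names are hypothetical): a function that calls back synchronously on a cache hit and asynchronously on a cache miss, made consistent by deferring the synchronous branch with process.nextTick.

```typescript
import { readFile } from 'fs';

type Callback = (err: Error | null, data?: string) => void;

const cache = new Map<string, string>();

// Inconsistent: the callback runs synchronously on a cache hit,
// asynchronously on a cache miss, the kind of API the article warns against.
function readConfigMixed(path: string, cb: Callback): void {
  const cached = cache.get(path);
  if (cached !== undefined) {
    cb(null, cached); // synchronous branch
    return;
  }
  readFile(path, 'utf8', (err, data) => {
    if (err) return cb(err);
    cache.set(path, data);
    cb(null, data); // asynchronous branch
  });
}

// Consistent: the cache-hit branch is deferred, so the callback is
// always invoked asynchronously, never inside the caller's stack frame.
function readConfigAsync(path: string, cb: Callback): void {
  const cached = cache.get(path);
  if (cached !== undefined) {
    process.nextTick(() => cb(null, cached));
    return;
  }
  readFile(path, 'utf8', (err, data) => {
    if (err) return cb(err);
    cache.set(path, data);
    cb(null, data);
  });
}
```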
https://pjdarnold.medium.com/javascript-break-promises-keep-callbacks-4dbf9cff3d9a
['Arnold Abraham']
2020-12-10 09:10:55.330000+00:00
['Technology', 'Programming', 'Web Development', 'JavaScript', 'Coding']
641
Want to learn Robotics?
Want to learn Robotics? What is Robotics? As the term suggests, robotics is the scientific study of robots. The term robot comes from the Czech word "robota", meaning forced labor, and was coined by Karel Capek in 1921. Later, back in the 1940s, Isaac Asimov formulated the three laws of robotics. Laws First Law The first law of robotics states that a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law The second law states that a robot must follow the commands of a human being, except where such commands would conflict with the first law. Third Law A robot must protect its own existence as long as such protection does not conflict with the first or second law. A robot is an integrated system made up of sensors, manipulators, a control system, power supplies and software, which work together in order to perform a task. The study of robots therefore requires a multidisciplinary approach in which subjects like physics, mechanical engineering, electrical engineering, structural engineering, mathematics and computer engineering are involved. All robots are machines, but not all machines are robots, because to be a robot a machine needs certain characteristics. Those characteristics are: Sensing One of the most important aspects of a machine being called a robot is its ability to sense the environment in which it is working. The sensing abilities of a robot are not very different from those of humans. Robots are engineered to have light sensors which perform the function of eyes, touch and pressure sensors which perform the function of hands, chemical sensors which perform the function of a nose, hearing and sonar sensors which perform the function of ears, and taste sensors which perform the function of a tongue; all these sensors together make a robot aware of its surroundings. Movement A robot has the ability to locomote from one place to another, like the Sojourner rover, or to move one of its parts, like the Canadarm, to carry out the movement of products. This can be done by rolling wheels, walking legs or propelling on thrusters. Energy A robot is required to have a power source to carry out its tasks; a robot could get its energy through solar charging, electrical charging or a battery, depending on the robot. Conclusion After reading this blog you know what robotics is, how the term was coined, the fundamental laws of robotics, and how to tell the difference between a machine and a robot.
https://medium.com/@jeynadar23/want-to-learn-robotics-8150fcce8f58
['Adith Sreejey']
2020-12-15 03:28:00.710000+00:00
['Technology', 'Robotics', 'AI']
642
Virtual Learning: What is Our Goal?
Photo by Charisse Kenion on Unsplash It's not what we are good at, teaching through a flat screen. As educators, we know that in-person engagement is key to learning, and multiple intelligences help kids learn. Some kids learn best when moving, others when writing, some when just listening. We must pull all of these elements together in a virtual environment to succeed. Now, we have transitioned to virtual learning and some of us to hybrid learning. Although we have discovered many different approaches to teaching through this disastrous time, don't throw the baby out with the bath water. While what we normally do in classrooms does not always translate to virtual learning, teachers will be most effective if they continue to use the strategies they know best. In my geography classroom, I typically use a lot of movement. We do songs, dances, and chants to help cement our knowledge of directions and definitions. On my virtual platform, I could create a video of me performing these steps as my method of teaching and post it on a platform for students to view. Or, I could skip the idea entirely and find another approach. But, I know the methods I typically use work really well for students. So, why change them? As we find ways to use our new online platforms, we have to be realistic. Students will not follow a "next step" module and learn effectively. Independent learning works when we are pursuing a passion or are interested in the topic. When we are not, however, or if we are confused, we need engagement for questions and conversation. Engagement spurs ideas and innovation. We cannot assume students will even complete modules effectively. I, myself, have been assigned online lessons with videos, and I fast-forwarded through them just to complete the modules quickly. However, if someone had engaged me personally, asked for my ideas or opinions, I would have been a more active learner and actually taken to the content. Regular, in-person school provides accountability that virtual school does not offer. Accountability is not just a checklist. Rather, accountability is being an engaged participant in your learning. It is interacting with others, it is questioning, processing. Not just completing a module "to-do" list. That is just task completion. We are not trying to teach task completion, or we should not be. Rather, we should be teaching collaboration, problem-solving, and innovation. Completing a task-oriented lesson does nothing to build this skill set. Use the tools with which you are familiar. Use the tools you know are developmentally appropriate for students and will engage them in their learning. Adapt, but do not completely change. Do not eliminate paper and pencil. Encourage doodling while you are talking online. Encourage using different colored pens and pencils. Question, discuss, explore. Encourage creativity. Build, create, design. In the past few months, it has become evident that our future needs these kids to be creative problem-solvers and innovators. It's up to us to help them.
https://medium.com/age-of-awareness/virtual-learning-what-is-our-goal-3c974f9c567e
['Jennifer Smith']
2020-10-03 01:09:29.321000+00:00
['Virtual Learning', 'Education Technology', 'Innovation', 'Education', 'Teaching']
643
The key to learning fast is looking dumb
A common trait I see in new developers is the fear of looking dumb. I know because I had the same concern. I thought that seeming stupid would cause others to question my capabilities and affect my career progression. Nothing could have been further from the truth. To explain why, let me tell a quick story. One of my first tasks as a software engineer was to investigate a bug for an important customer. They were experiencing timeouts from one of our API endpoints and it was impacting their workload. Since I was still new to the company, I wanted to prove to my team that I didn’t need hand-holding. To show them that they had hired the right person. I spent hours parsing through the source code until I found the root of the problem: an inefficient SQL query. To fix the timeouts, I would need to optimize it. Unfortunately, the query was complex and took some time to understand what it did. But I was determined to fix the issue myself. I wasn’t a SQL expert by any means, but I knew enough. After all, how hard could it be to optimize one SQL query? Very hard, apparently. I spent the rest of that day and the next attempting to improve it. I got stuck and beat my head against the wall several times. But eventually, using Stack Overflow and lots of manual testing, I was able to hack together a solution. It was even more complicated than the original, but it did the job. I felt satisfied and elated as I put together my Pull Request and sent it to my teammate to review. The happy feeling didn’t last long. Upon opening my Pull Request, my teammate noted that there was a much simpler solution. My face immediately flushed beet red. All that hard work only to end up with egg on my face. Reluctantly, I asked if he could explain his simpler solution. I worried that he was going to scoff at my ineptitude, but he was more than happy to walk me through it. When we benchmarked it, it executed much faster than my Frankenstein solution. My teammate was patient and understanding. He even gave me some resources I could use to follow up and learn more. Still, I was embarrassed to have wasted two days of work. What made it worse was I had considered asking for his help several times over those two days. I had decided against it each time because I didn’t want to look dumb. I ended up looking even dumber as a result. When I expressed this to him, he was sympathetic and said something that stuck with me. Next time, don’t hesitate to ask about anything you don’t understand — we don’t expect you to know everything. It’s better for me to spend 10 minutes explaining something than to have you running in circles for hours. It’ll save everyone time in the end. Photo by Kaleidico on Unsplash While this story is specific to me, many new developers are going through similar ordeals. And it’s understandable. When I started my first job, everyone around me seemed like an expert. It was intimidating as hell. I thought the best way to get to their level was to persist and struggle on my own. After all, as a self-taught developer, that’s what I was used to. The truth is, if you want to get to the level of your peers, you need to ask for their help. If this sounds like obvious advice, it’s because it is. Many of us already know this intuitively. But it’s important to be reminded of it. Common knowledge is rarely turned into common practice. Have you ever stopped yourself from asking for help? Have you refused to ask for assistance even when the situation kept getting worse? There’s a good chance that you have. 
And why did you stop yourself? Maybe it wasn't a good time to ask. Maybe you didn't want to be bothersome. Or maybe, if you're like me, fear was a big factor. However, this fear is holding us back from an essential part of learning: feedback. Photo by Charles 🇵🇭 on Unsplash Feedback is crucial In this context, I define feedback as any advice you get from a peer. It can come in the form of a teammate reviewing your code, brainstorming together about a particular bug, or simply asking them, "What does this do?" While it's true that self-study is important, it's only one aspect of learning. Feedback is another vital piece of the puzzle: It has long been recognized, by researchers and practitioners alike, that feedback plays a decisive role in learning and development […] We learn faster, and much more effectively, when we have a clear sense of how well we are doing, and what we might need to do in order to improve. And yet, it's tough to ask for feedback. Receiving negative criticism can feel like a personal insult. You may feel like you "should know this already" and asking for help will be an admission of your naivety. But struggling with a task in isolation leads to stress, and stress makes you stupid. Being unwilling to look stupid in the short term will only make you appear more foolish in the long run. So instead of ruminating on the momentary unease, focus on your long term goal of mastery. The sting of negative comments is fleeting but the benefits of learning are timeless. Immediate Results The best thing about feedback is that the results are immediate. You will instantly know more than you did before. Consistently asking for feedback will make the pace of your learning skyrocket. You'll find yourself asking questions much more often. Take it from me. Searching for the phrases What does that do? or What does that mean? in my Slack search reveals the litany of "stupid" questions I've asked. Did I feel dumb asking those questions? Oh yeah. Did I learn a lot because of it? Absolutely. Asking questions makes you look smarter If I haven't convinced you yet, then I've saved the best for last. The truth is that asking questions won't make you stupid. Quite the opposite: We actually view people who seek our advice as much more competent than people who forego the opportunity to seek advice. This is because being asked for advice is flattering, it feels good. They're asking for my advice because they think I'm smart and I know the answer. I think they're smart because I'm going to tell them things that will be useful and help them do the task better. Asking peers for their input makes them feel important and needed. It turns out, people like to feel that way. And in return, they view you as competent for having the courage to ask. It's a page straight out of Dale Carnegie's How to Win Friends and Influence People. Lastly, you will be a godsend to those who are struggling with the same problems. More than once I have asked for help on a specific issue only to have another engineer privately thank me for asking. They had been struggling with the same issue but were too self-conscious to ask. Everybody is a "junior" in something. In the end, the fear of seeming stupid is exactly that: a baseless fear propped up by insecurity. Not only will asking for feedback and showing vulnerability make you appear more competent, but it will also make you competent. So the next time you find yourself stuck, do yourself a favor and ask for help. You may feel dumb now but give it time.
They'll be the ones seeking your advice soon enough.
https://medium.com/free-code-camp/sometimes-the-key-to-learning-fast-is-looking-dumb-9166fb78c234
['Sun-Li Beatteay']
2019-05-16 20:22:01.020000+00:00
['Software Development', 'Programming', 'Learning', 'Technology', 'Software Engineering']
644
Why you should care about Continuous Integration ?
Continuous integration (CI) is a practice in software engineering where members of a team integrate their work frequently, with each person usually integrating at least once daily, leading overall to multiple daily integrations. Each instance of integration into the shared mainline is verified by an automated build (including a test), which enables quick detection of integration errors. CI is an approach that, many find, leads to measurably reduced integration problems and allows a team to develop cohesive software quicker. The following is an overview of the practice and an argument for its implementation. But first let’s take a quick glance at our own history and how we got to using CI in our own work. Our story After we organised the first AngularJS meetup in Poland in 2014, we tweeted about it and a couple of weeks later a big research company from Germany (with offices spanning all of Europe) pinged us on Twitter. Soon after, one of the questions that company’s CTO asked us was whether we used continuous integration. Back then we did not. We were a small team and wanted to focus on delivering our projects and not spend weeks on figuring out our company’s internal processes. This, and specifically us not using CI, is why we didn’t get the gig. They decided to go with someone else and thought we were not organised in a way that would/could satisfy a big client. This of course got our CTO to start thinking about CI, and this year we have finally implemented it in our company for all current projects. But why was that research company so insistent that their subcontractors use CI? What are some of the main characteristics and advantages of continuous integration? CI implements and utilises continuous processes of applying quality control. These processes are regimented into small pieces of effort, applied frequently, sometimes as frequently as possible. By thus replacing the traditional practice of applying quality control only once all development has been completed, CI aims at an improvement in quality of the produced software and a reduction in the time of its delivery. This is attempted by organising work in such a way that integrating new or changed code is done frequently enough that no big-enough window remains between commit and build, and therefore optimally no errors can arise in the process without the developers spotting and correcting them immediately. The way to set this up is by triggering builds with every commit to a repository instead of just with a periodically scheduled build. In a multi-developer environment replete with rapid commits this has the advantage of triggering a short timer after each commit, and then starting a build either when the timer expires or following a longer interval since the previous build. This is done with automated tools such as CruiseControl, Jenkins etc. which offer scheduling of this sort by default. CI requires a version control system with atomic commit support, meaning that all of the developer’s changes can be seen in terms of a single commit operation, making all the changes included in every subsequent build. To this end, the build needs to complete quickly, so that all the problems with integration are quickly identified. Conclusion Continuous integration is not all easy since the initial setup requires time. A well-developed testing suite is also needed to achieve its automated testing advantages. Finally, hardware costs for build machines can also be significant. 
But once these investments are made, the advantages of CI become evident over time and make implementation well worth the initial hassle. Some of these advantages of CI include continuous detection and fixing of integration problems, avoiding last-minute chaos of release dates; early warnings of broken or incompatible code and other conflicting changes; immediate unit testing for all changes; availability at all times of an up-to-date build for demo, testing and release purposes; and immediate feedback available to developers on the functionality, quality, and overall impact of their code. Many teams using CI report that the advantages easily outweigh the disadvantages in practice and that the ability to detect and fix integration bugs early saves both time and money throughout project development. Continuous integration in a nutshell: Advantages: continuous detection and fixing of integration problems, avoiding last-minute chaos around release dates; early warnings of broken or incompatible code and other conflicting changes; immediate unit testing for all changes; availability at all times of an up-to-date build for demo, testing, and release purposes; immediate feedback available to developers on the functionality, quality, and overall impact of their code. Disadvantages: the initial setup requires time; a well-developed testing suite is needed to achieve CI's automated testing advantages; hardware costs for build machines can also be significant. This article was originally taken from Briisk Blog.
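As an illustration of the commit-triggered scheduling described earlier in the article (a short quiet-period timer after each commit, with a cap on how long a build can be deferred), here is a minimal TypeScript sketch. The constants and function names are hypothetical; it is not taken from CruiseControl, Jenkins, or any other real CI tool.

```typescript
// Debounced build trigger: start a build shortly after the last commit,
// but never let pending commits wait longer than MAX_WAIT_MS in total.
const QUIET_PERIOD_MS = 2 * 60 * 1000;  // wait for a commit burst to settle
const MAX_WAIT_MS = 15 * 60 * 1000;     // upper bound between commit and build

let quietTimer: ReturnType<typeof setTimeout> | null = null;
let oldestPendingCommit: number | null = null;

function runBuild(): void {
  oldestPendingCommit = null;
  console.log('starting build for all pending commits');
  // ...invoke the actual build and test pipeline here
}

export function onCommit(): void {
  const now = Date.now();
  if (oldestPendingCommit === null) oldestPendingCommit = now;
  if (quietTimer) clearTimeout(quietTimer);

  // Build when the quiet period expires, or immediately if the oldest
  // pending commit has already waited long enough.
  const waited = now - oldestPendingCommit;
  const delay = Math.max(0, Math.min(QUIET_PERIOD_MS, MAX_WAIT_MS - waited));
  quietTimer = setTimeout(runBuild, delay);
}
```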
https://medium.com/briisk/why-continuous-integration-should-be-your-priority-8a1fccdc787d
['Kamil Augustynowicz']
2017-12-11 12:46:27.417000+00:00
['Technology', 'Continuous Integration', 'Software Development', 'Software House', 'Web Development']
645
Pay Anywhere with Mobile Wallets. Today, we’re introducing Instant…
Today, we're introducing Instant Provisioning (also known as Mobile Wallet). This feature lets cardholders safely and conveniently link their debit card to their mobile wallet without manually inputting their card details, allowing them to connect their Synapse-powered debit cards to their mobile devices in one click. According to eMarketer, the number of mobile payment users is increasing in the US. Users grew from 58.7 million in 2018 to 64.0 million in 2019, and the number is projected to grow to 69.4 million by 2020. More and more consumers are looking for the ease and speed of using a mobile wallet, and now with Synapse's Instant Provisioning, your card program can support their aspirations. Instant Provisioning gives customers the ability to open their Google Pay, Apple Pay, or Samsung Pay wallet, agree to the terms of service, and link their Synapse-powered debit card directly to their mobile wallet. Like to learn more? Visit our blog.
https://medium.com/synapsefi/introducing-instant-provisioning-pay-anywhere-with-google-pay-apple-pay-and-samsung-pay-mobile-5689813e227e
['Jason To']
2020-12-09 22:34:58.528000+00:00
['Banking Technology', 'Mobile Wallets', 'Fintech']
646
Andy the Anatomical Sidekick Turning Lauren’s Frown Upside Down
Humans are irrational. At least, a robot would think so… for now. PhD researcher Lauren Fell is working on a framework that will help robots understand how humans make decisions based on facial cues. "Human judgements are difficult to explain probabilistically because things like cognitive biases make us unpredictable," Lauren said. "We don't really know exactly how we decide to trust or not trust someone when we first see their face, for example." Lauren is applying quantum cognition to social perceptions — using theories of quantum physics to explain cognitive phenomena, specifically how we perceive faces. She is even building herself a side-kick to help — Andy. Lauren Fell is a PhD researcher in the QUT Information Systems School. She is using quantum physics theories to explain cognitive phenomena, specifically how we perceive facial expressions. She also taught herself how to build and program robotic soft muscle to develop Andy the Anatomy Bot, a prototype head that mimics human facial expressions. "'Andy' is an anatomical robot I started building as a side-project, but now think he will be useful for my research," Lauren said. Andy is a prototype anatomical robot face with silicon muscles and air valves, designed to mimic human facial expressions. Lauren and Andy will teach people how to make soft muscles they can take home with them from Robotronica — QUT's free robotics festival at the Gardens Point campus on 18 August, 9am — 4pm. "It was actually just an idea I had one day — I was looking into pneumatic muscles for a freelance project I was working on and thought it might be interesting to make a robot with all the same muscles as a human face. "When the call for expressions of interest came through for Robotronica late last year, that gave me the motivation to actually try it out and I've been working on it ever since. Lauren completed her undergraduate degree in psychology — not robotics — and taught herself how to make soft robotics using code, silicon moulding, air valves and 3D printing. She also learned about facial anatomy and which muscles were used in different expressions, such as smiling. "I started by learning how to design the muscles themselves. The construction of the muscles using silicon and 3D printing was actually the harder part of Andy's development," Lauren said. Giving Andy facial expressions was trial and error, according to Lauren. "Coding was not that complex. I just needed to make sure air was getting to the right muscle so it expanded and contracted the way a real muscle would." "His expressions are not large at the moment because he is just a prototype but he will have stronger muscles in the future. "It's been a really fun learning experience and my PhD has actually gone in the direction of looking at how people judge personality traits from faces, so I might even use Andy for that research which might be interesting."
https://medium.com/thelabs/andy-the-anatomical-sidekick-turning-laurens-frown-upside-down-925bfc9afa2d
['Qut Science']
2020-09-24 00:14:33.397000+00:00
['Engineering', 'Robotics', 'Quantum', 'Technology', 'Psychology']
647
Enterprise How to tap the true value of 80% unstructured data?
Modern enterprise applications in the digital economy will generate a variety of data, including operating system logs, network and security device logs, business access and transaction data, etc. These data are very large in volume and come in different formats. Big data analysis concept Big data analysis refers to the analysis of large-scale data. Big data can be summarized by the 5 Vs: large data volume (Volume), fast speed (Velocity), multiple types (Variety), value (Value), and authenticity (Veracity). Big data is the most popular IT industry vocabulary nowadays. The subsequent use of data warehouses, data security, data analysis, data mining and so on around the commercial value of big data has gradually become the focus of profit sought after by industry professionals. With the advent of the era of big data, big data analysis has also emerged. Challenges facing big data analysis IDC data shows that at present, structured data in enterprises accounts for 20% of the total data volume, and 80% is unstructured data. How to mine the true value of that 80% of the data, including the hidden risks and commercial value, has become the main goal of big data analysis. 1. Data islands There is a lot of information in logs, network traffic and business process data, but they are isolated from each other, and these logs need to be effectively correlated. 2. A wide variety of log formats Different systems and devices have different log formats. Especially when dealing with unstructured and semi-structured data, how to quickly normalize various types of logs is the primary problem for big data analysis. 3. Department independence Security, business, and operations and maintenance are independent of each other, and their respective concerns and perspectives are different. It is difficult to meet the needs of the multiple operation and maintenance responsibilities of business and operations departments in many enterprises on the same data analysis system. 4. Retrospective analysis and evidence collection Security incidents occur frequently, and we want to learn from these incidents and understand how to effectively conduct retrospective analysis and obtain evidence. 5. Fast iteration Once the analysis system is developed, it is difficult to make adjustments to data sources, analysis dimensions, report presentation, etc., to adapt to new requirements. 6. Regulatory compliance The cyber security law and the security system both put forward relevant requirements for log retention and security situation awareness. "Faced with massive amounts of data, whoever can better process and analyze the data can truly seize the opportunity in the era of big data." This is almost the consensus of everyone in the industry. The analysis of massive data has become a very important and urgent need of enterprises and governments. Holographic data perspective solution The holographic data perspective system launched by Pangeo Technology is an analysis platform for multi-source, heterogeneous, massive data. It can collect, organize, archive and store machine data in any format, and provide data analysis, search, reporting and visualization capabilities.
Using logs from various security devices, network traffic, personnel behavior data, host application logs, and business system data, together with intelligent analysis methods such as correlation analysis, user behavior analysis, and machine learning, it can provide business security threat monitoring, application and network security attack monitoring, and operation and maintenance security monitoring and analysis functions. Main functions of holographic data perspective 1. Security threat analysis Analyze and present the current attack situation of the system, including malicious accounts and IP addresses controlled by attackers, attacked systems, etc.; 2. Business threat perception Perceive, analyze and present behaviors that create account and data leakage risks, such as credential stuffing (database collision) attacks and crawlers, as well as business fraud such as order brushing and flash-sale abuse; 3. Application security audit Audit abnormal account operations and access behaviors; 4. Operation and maintenance monitoring support Provide support for operation and maintenance work such as application resource monitoring, application access analysis, and operation analysis; 5. Customizable security visualization A large-screen display can be provided for real-time monitoring in the monitoring center. Core advantages of holographic data perspective 1. Full-view monitoring 2. Unstructured index 3. Search engine query 4. Drag and drop dashboard 5. Excellent support platform 6. Built-in variety of mathematical statistics and machine learning models 7. Enterprise-level architecture Conclusion Big data is everywhere, and data analysis has become crucial in this era of big data. This is also the key to realizing the commercial value of big data. Only good data analysis can grasp the lifeblood of industry development in advance and seize the initiative in the market, thus creating greater value.
https://medium.com/dataprophet/enterprise-how-to-tap-the-true-value-of-80-unstructured-data-3f029497677
['Sajjad Hussain']
2020-10-19 07:45:42.988000+00:00
['Technology', 'Data Science', 'Data Visualization', 'Big Data', 'Data']
648
What a “Secretary Pete” Buttigieg Means for DOT Innovation
Yesterday it was made official. President-elect Joe Biden selected his one-time opponent for the Democratic presidential nomination and former Mayor of South Bend to serve as the Secretary of Transportation. Both progressives on the left and conservatives on the right will have their qualms about Mayor Pete becoming Secretary Pete — check out Twitter if you don’t believe me. But soon-to-be “Secretary” Pete’s South Bend track record and proposed national transportation policy proposals present hope for people in the transportation and mobility space who have sought innovation-centric leadership at the Department of Transportation (DOT). This need for strong leadership and new approaches comes at a pivotal time. The pandemic has laid bare the embarrassing and underfunded state of our public transit systems and infrastructure. And it’s no surprise that the American Society of Civil Engineers has continued to warn both elected officials and the public of our crumbling infrastructure and the looming disastrous impact on safety, national security and our economy. Moreover, Congress has been unable to come together and pass bipartisan legislation for the 21st century transportation policy that everyone in the United States needs and deserves. And despite all these problems, we are living in the golden era of big ideas and innovation in transportation and mobility — from rapid advancements in autonomous and semi-autonomous vehicle technology to scaled adoption of micro mobility solutions, to a sea change in electric vehicle demand, to moonshots like Hyperloop to a public debate on parity of transit funding! Secretary Pete is the right person to take on the herculean task of embracing and leveraging these exciting innovations to chip away at the immense transportation, mobility and infrastructure problems we face. Here’s a preview of the exciting ways that Secretary Pete may bring innovations to the Department of Transportation. Photo Credit: The Washington Times Prioritizing systems-based innovation Secretary Pete was early to champion innovation and performance management within his administration at South Bend. He hired Santiago Garces initially as a Performance and Innovation Manager to reform the city’s key performance indicator tracking system, which ultimately led to Santiago being hired as the City’s first Chief Innovation Officer — a unique role for a small US city in 2015. This new position led to the systematic overhaul of IT services, the formation of Business Analytics group and a Civic Innovation Department and slew of adopted innovations that were both developed in-house and adopted from external vendors. It’s one thing to elevate the role of innovation and technology to the Mayor’s cabinet, it’s another thing to welcome and encourage the full-scale modernization and re-thinking of government systems ranging from public works to 311. If Secretary Pete implements even a fraction of his bold approach to systems change at the DOT, then he will reshape how the federal agency embraces and promotes innovation and technology as a standard practice. Climate change and the environment On the presidential nominee campaign trail, Secretary Pete announced a $1 Trillion plan to address America’s crumbling infrastructure. 
That ambitious plan included proposals specific to tackling climate change and environmental issues — ranging from billions in federal funding for modernization of flood protection systems for communities impacted by climate change to new regulations aimed at incentivizing smart electric grid technologies across the US. It also included $150 billion to improve public transportation with a majority of that funding to go to repairing and expanding existing rail and bus services. How would this bold policy plan be paid for? The bulk of the expense for the transportation and infrastructure plan was covered by a $1 trillion climate change plan that included $200 billion in investments in clean energy R&D, a $250 billion Clean Energy Bank for financing new clean-technologies and an additional $50 billion seed fund to help catalyze riskier experimental green tech. Of course, these policy proposals may never see the light of day, but it’s clear that Secretary Pete will mobilize the full force of the DOT to combat the massive challenges of climate change in America. A focus on the little guy Most folks know at this point that Secretary Pete was once Mayor Pete of South Bend, Indiana — population 100,000. As Mayor of a small midwestern city, Secretary Pete championed and served as a genuine amplifier for the hard work of small towns and cities beyond the usual big city circuit of New York, Los Angeles, Chicago, etc. His ability to resonate with leaders of small cities and towns across the US became obvious when local elected leaders from California to Pennsylvania endorsed him for the Democratic nominee for president. Secretary Pete is likely to not forget his small city roots or those who supported his candidacy when taking this new role. He is uniquely suited help the DOT better deploy much-needed federal resources, programs and attention to the small towns and cities that millions of Americans call home. Lions, Tigers and Complete Streets Oh My As Mayor of South Bend, Secretary Pete selected a team of city employees to participate in the Safe Street Academy. The result of which was community-centered research and the implementation of complete street and traffic calming strategies in South Bend. The project implemented short-term pilot projects to try and slow dangerous traffic and create alternative mobility options for community members. Secretary Pete has seen firsthand how such strategies to form complete streets can dramatically improve safety. It’s possible that Secretary Pete could help champion such approaches in the DOT and unlock new resources and funding for more local governments across the US to adopt similar strategies to those that helped make South Bend’s street safer for everyone. Silicon Valley’s Secretary Secretary Pete views the world through tech lenses. This belief became apparent when Silicon Valley coalesced around his presidential campaign. To the chagrin of many progressives, he built relationships and received contributions from tech leaders ranging from Mark Zuckerberg of Facebook to Reed Hastings of Netflix and many others in Silicon Valley’s Big Tech and startup scene. While one may negatively question his coziness to technology companies, I would argue that Secretary Pete is well positioned to leverage his relationships and mobilize the private sector tech community to be an integral part of the 21st century transportation, mobility and infrastructure solutions we need. 
In this way, Secretary Pete could jumpstart new private sector investment towards infrastructure and transit tech that the federal government would never be able to accomplish on its own. Welcome mat for start-ups & infrastructure tech While running for president, Secretary Pete may have been embraced by Silicon Valley tech leaders, but during his time as Mayor of South Bend it was he who embraced infrastructure tech and startups to help solve major infrastructure challenges. Two such examples are the infra-tech startup companies EmNet and RoadBotics. While EmNet was adopted by South Bend before Pete was Mayor, he championed the tech's value nationally and shared the impact it had on South Bend, preventing more than 1,500 gallons of combined sewage from entering the local St. Joseph's River. As Mayor, he did initiate the adoption of RoadBotics' AI pavement assessment software to receive in-depth data about the potholes and surface condition of its 550-mile road network (full disclosure, I previously led the Growth team at RoadBotics). If Secretary Pete maintains his risk tolerance for testing and adopting new tech from startups like EmNet and RoadBotics, then it's likely that we will see more programs and funding geared towards connecting startup solutions to the DOT as well as creating more opportunities for state and local agencies to adopt impactful infra-tech. A final thought: Equity-based transportation When publishing his $1 trillion infrastructure campaign proposal as candidate Pete, he defined it as a plan focused on opportunity, equity and empowerment. And in a tweet publicly announcing President-elect Joe Biden's offer to be Secretary, he made a similar nod to enhancing equity for all. What this rhetoric means in practice is still to be determined. But it is certainly important to have a Secretary of Transportation who publicly acknowledges the systemic inequities built into the very fabric of our transportation, infrastructure and mobility systems. Admission is a first step. But real, meaningful 'innovation' at DOT will be bringing forth policies and programs that tackle the challenge of inequities in American transit head-on. I think Secretary Pete is up for the job.
https://medium.com/@ryanlg53/what-a-secretary-pete-buttigieg-means-for-dot-innovation-237c9088bf63
['Ryan Gayman']
2020-12-16 03:18:13.952000+00:00
['Startup', 'Biden', 'Technology', 'Innovation', 'Transportation']
649
Multiplayer, IRL
Earlier this year, we hit the road to meet Figma users in person at our Design System Meetups. In 8 cities across 4 continents, our community came out in force to share lessons and stories with one another. Conversations ran the gamut and covered topics far deeper than basic tips ‘n tricks. People debated the most effective ways to run design critiques, talked through the pros and cons of open design culture and forged new partnerships with each other for work. We learned a lot during these gatherings, and others felt the same. That made us wonder: How do we empower our community to connect in person more often? We spent the last few months figuring out what that could look like, and today we’re excited to introduce the first step: Figma Local Communities around the globe. We’re launching with groups in 20+ cities (and adding new ones every day). A sampling — Accra, Amsterdam, Berlin, Boston, Copenhagen, Lagos, London, New York, San Francisco, Seattle, Tel Aviv. You can see the full-list here. What do you want to see in your city? We picked these cities through a combination of art and science. Some places have evangelists already throwing Figma-related meetups. Others have a particularly concentrated Figma user base. We’re excited to launch with such a wide array of geographies covered, and this is only the beginning. Some of our new Local Communities: Accra, Amsterdam, Berlin, Boston, Copenhagen, Lagos, London, New York We envision these groups being a supportive place where people can connect, learn and share. Sure, that may include Figma best practices and workflows, but also the challenges and successes designers face every day. If you want to get involved, fill out this form to: Start a Figma group in your city Share what you want to see with your Local Community Pitch specific meetups or other events in your area The Designer Advocate team: To make sure your ideas are heard, we created the Designer Advocate role at Figma. We have 3 people in different time zones and geographies to support these Local Communities. Tom Lowry — North America Many of you may know Tom’s work already. While working as a Senior UX Designer at the enterprise company OpenText, he practically moonlighted as a Figma instructor. He wrote many blog posts teaching fellow designers how to get started in Figma and structure their components for flexibility. He even built our Material Design resource kit for our Styles launch. He kept the trend going in his first weeks at Figma, producing an online Skillshare class on building design portfolios. Zach Grosser — Europe Zach was an early communications designer at Square, where he crafted the slides for Jack Dorsey’s big talks, and even designed the company’s IPO roadshow pitch deck. He brought Figma to the Square design team in 2013, making him one of our earliest adopters. After we met him in person (at Figma’s first ever Shindigma!) Zach became a valuable source of feedback on the product, gamely jumping on calls whenever we were testing a new feature. His passion for design education — and a move to Amsterdam — brought him to Figma full-time. Namnso Ukpanah — Africa Namnso first came to our attention when he drafted a 7-page proposal for a Figma design systems meetup in his home town of Lagos. Thanks to his promotion of the event, over 300 local designers and developers showed up to connect with each other and Figma. 
In the 6 months we’ve worked with him, he’s already onboarded 21 Figma ambassadors across 11 cities and 7 countries (while still working as the Lead Designer at hotel booking site Hotels.ng). We’re still nailing down the particulars of how this will work, but consider these Advocates your boots on the ground.
https://medium.com/figma-design/multiplayer-irl-fa7e19368776
['Claire Butler']
2018-09-27 17:48:19.540000+00:00
['Design', 'Business', 'UI', 'Technology', 'UX']
650
DEVS FOR HEALTH: a Hackathon for Good — Codemotion Magazine
Hackathons have had a long history of bringing people together to develop technological solutions in response to specific problems and challenges. Hackathons can be in-house, focused on a particular technology, a social issue, or a policy challenge. Over the last year, we’ve seen a shift from in-person events as COVID-19 has required extensive efforts to reduce the risk of exposure. While virtual events may lack the in-person experience, they present the opportunity to deliver tremendous value and rapid innovation. In a matter of days, teams can deep-dive into a specific problem and develop, test, and launch prototypes, which, if successful, can lead to full-scale commercial services and initiatives. This year, Codemotion partnered with Gilead Sciences to create the hackathon DEVS FOR HEALTH in response to the challenges of living with HIV. This document provides an in-depth analysis of the event: the logistics, the challenges, the stakeholders, and the benefits to all parties. It’s particularly relevant to organisations that may be considering hosting, partnering, sponsoring, or contributing to a hackathon but are not sure where to start or how their company can reap the benefits. Why the DEVS FOR HEALTH hackathon? While the treatment of HIV may have progressed significantly over the last few decades, the challenges of detection and treatment still persist. The disease now manifests as a chronic condition for most people in Europe. However, the realities of complex health needs and stigma mean that living with HIV still has a significant impact on a person’s wellbeing. It is estimated that 120,000 people with HIV live in Italy, of which about 18,000 are unaware of the infection. HIV may not show symptoms for a long time and can be diagnosed even after years (on average 4.5 to 5 years after infection). The delay in diagnosis can result in a faster course of infection, less efficacy of therapy and a greater risk of increased transmission of the virus. The COVID-19 epidemic has made it even more difficult, if not impossible, to offer HIV testing and health care assistance to people living with HIV. According to Cristina Le Grazie, Executive Director Medical Affairs of Gilead Italy: “DEVS FOR HEALTH is the new initiative that Gilead Italy has developed at the service of those living with HIV. Innovation for us does not only mean developing medicines that are increasingly responsive to the management of different types of patients, but also putting technology at the service of the community and of those living with the HIV virus. Hence the two themes selected for this initiative: the emergence of undiagnosed cases and the quality of life of patients. The participation of doctors and patients focuses on the needs of the two actors in the fight against this disease to ensure that the technological solutions that will be developed fully meet their needs”. The DEVS FOR HEALTH hackathon challenge “Intuition and imagination on the one hand, digital skills on the other. An explosive mix to contribute to the definition and implementation of innovative technological solutions in the fight against HIV infection”. 
Cristina Le Grazie, Executive Director Medical Affairs of Gilead Italy Hackathon participants were invited to develop a specific product or service with respect to a particular challenge across a number of issues experienced by people with HIV/AIDS, specifically a mobile app, web app or another service not categorized as a medical device, with the choice of two broad umbrella categories of problems to solve, each with deeper niches within: The challenge of the undiagnosed and access to HIV care In short, participants were tasked to create an innovative solution on the issues of: Knowledge of infection and testing Finding and integrating information on the state of the infection in Italy Facilitating access to the test Simplifying the procedure for initiating and maintaining therapy This could specifically focus on: Health education / Health promotion: promote awareness of risky behaviours among the general population and the so-called key populations (MSM; male, female, trans prostitution; prisoners; drug addicts, young people, etc.) and the importance of timely testing in the event of such behaviour. Access to HIV testing: Solutions that facilitate access to the test: Where to perform it Which test to do Speed of execution Obtaining the result The expansion of places where testing can be performed Cultural and operational barriers in the population and in health workers Data integration Italy lacks a universally accessible central site for AIDS epidemiology. Participants were tasked to design solutions based on the following categories that promote the integration of information for better and more rapid knowledge of the state of the infection in Italy and for better management of information on individual patients. Devise solutions that promote the removal of obstacles that prevent or slow down the initiation and maintenance of therapeutic treatment, both from a bureaucratic and organizational point of view and those linked to other factors. Quality of life This category sought innovative solutions to improve the quality of life of HIV-positive patients, who face personal and social challenges as well as those related to the management of the now chronic infection and obstacles to therapy. Stigma: Devise solutions that can alleviate the isolation of the person with HIV and/or the stigma still associated with the person and the infection in general. Management of chronic condition: Devise solutions that assist doctor/patient involvement and awareness in infection management, specifically: Continuity of the doctor-patient relationship in the periods between one visit and another, aimed at managing the infection and the diseases concomitant with HIV and consequent to ageing (e.g. diabetes, hypertension) Optimization of control exams for monitoring the state of seropositivity and for concomitant pathologies to avoid excessive medicalization Protection of the privacy of seropositivity with the increase of checks due to the chronicity of the infection and concomitant diseases Health discrimination Access to drug withdrawal Devise solutions that can facilitate the withdrawal of drugs for HIV therapy. The prize Each member of the two winning teams won Amazon vouchers worth € 3,000, in addition to being invited to the DEVS FOR HEALTH boot camps, five days of technical and training support to transform the idea into a concrete digital solution in the fight against HIV. 
The DEVS FOR HEALTH hackathon timeline June 4: Registrations opened (this included the early stages of forming groups and initial mentoring delivered through a dedicated Discord channel) June 15 — June 30: Formalise the composition of teams and dedicate focus to their proposal June 30 — July 12: Period of intensive work including compulsory mentor check-ins to monitor the progress and completion of tasks July 12: The deadline for sending the project proposals. Afterwards, the projects were sent to the Technical Jury, which evaluated them and pre-selected the best ones. July 20: Announcement of the four teams preselected by the Technical Jury, who would compete for the prizes and participate in the final event September 10: Online final winners announcement event. This event included: Welcome and institutional greetings Video-pitch of the four finalist projects and Q&A session Inspirational pitches and project evaluation Winners announcement September 19: Beginning of the 5-day Bootcamp Day 1: Business canvas, early adopters identification, business idea validation with industry experts, user stories Day 2: Functional and graphic mockup, possible rethinking, mockup validation with industry experts Day 3: MVP Day 4 and 5: Delivery and prototype demo The value of a 5-day DEVS FOR HEALTH bootcamp Unlike many hackathons, things didn’t end with the awarding of the winners. Winning teams were invited to attend a five-day bootcamp where they received the technical support necessary to transform ideas into prototypes and then into functioning services. This gave teams the ability to continue to work to enhance their respective projects and take advantage of the mentorship of experts, including clinicians and people living with HIV who provide valuable support and insight. The need to understand the problems the hackathon is trying to solve “We thought it useful to support the participants with information, content and materials prior to the hackathon that could help them to better address the topics of the challenges — videos and online resources”. Silvia Rossi, project manager, Gilead Hackathon for Codemotion. It’s critical that any tech is created with the end-user in mind — even more so in healthcare, when part of the challenge is to get people to engage in the first place. A more technical audience may not possess the necessary lived knowledge of HIV or health science as a discourse. In response, DEVS FOR HEALTH made a concerted effort to provide critical resources to give hackathon participants data-driven and patient-centric knowledge. This included: Documentation about the science and challenges of HIV Interviews with leading experts in HIV specific to the challenges, including Franco Maggiolo, Head of US HIV-Related Pathologies and Experimental Therapies, ASST Papa Giovanni XXIII, Bergamo, and Daniele Calzavara, Milan CheckPoint Coordinator and living with AIDS An analysis of existing data tools created around HIV. 
— This is particularly valuable in avoiding the temptation by teams to ‘reinvent the wheel’. The winning proposals of DEVS FOR HEALTH The challenge of the undiagnosed and access to care: fHIVe fHIVe aims to create a mobile App that provides practical answers to the needs of the population aged 18–35 on the subject of HIV. The dissemination and use of the App will increase awareness and early diagnosis of HIV in a young population in which prevention and early treatment significantly improve the quality of life. Quality of life challenge: UNLOCK 4/90 Unlock 4/90 is a smart locker service that facilitates the withdrawal process of antiretroviral therapies. Unlock enables HIV-positive patients to programme, via a mobile app, the day and time of the withdrawal of drugs in a hospital of their choice and carry it out in total privacy. The hospital pharmacist can receive the withdrawal request and place the drugs in one of the drawers of a computerized locker located near the hospital pharmacy itself. Special mention: PGP Medical Card PGP Medical Card involves the creation of a system for the exchange of sensitive medical information in a confidential and protected manner between the person and the health system. The data is encrypted using a general key belonging to the health service and can only be accessed through a system of 2 QR codes. Why join a hackathon? There are many great, tangible reasons for joining a virtual hackathon: Increase your experience working in groups Meet and collaborate, solve problems, share skills and help build better products Make friends and connections — you might find your new colleagues or housemate Gain confidence in explaining your ideas to new people Gain mastery of virtual tools and working online with a group of people Learn new skills and hacks “In some fields, and in particular in Italy, there are few opportunities to implement a real open innovation project: this hackathon has been the first step of a collaboration between a promoter involved in scientific research on one side, and the innovation brought by developers on the other. The opportunity offered by boot camps (i.e., the second important part of this project) will be a further step towards the development of the ideas into prototypes”. Silvia Rossi, project manager, Gilead Hackathon for Codemotion. Hackathons are for everyone One thing that is often overlooked is that hackathons are not just for developers. You might need writers who can translate tech concepts into practice, designers who can share charts and graphics, and financial folks who can set a potential budget of costs in case you win the funds to finance your idea. Anyone who might be an end-user of whatever you are planning to build will have valuable insights and a perspective you may not have considered. “We are always working to improve and offer a better experience to our users. It has been challenging to succeed in making this opportunity accessible to our target users: we have been working with all the project’s stakeholders to define the goals and make them clear and reachable in terms of their expected outcome. To do that, we had to work on both sides: with the promoter to find out and define the challenges, with the developers to support them at our best with our mentorship and resources”. Silvia Rossi, project manager, Gilead Hackathon for Codemotion. 
According to Silvia Rossi, project manager of the Gilead Hackathon, there are many benefits for a company getting involved in a hackathon, including: Collect new ideas that can be useful to address an issue or stimulate the development of new business models Discover and get to know new professionals for future collaborations Launch (or re-launch), promote, enhance or boost the brand Promote a brand new kind of event for marketing purposes An opportunity for the company to test a specific technology or promote the use of a particular tech product Gilead Sciences shared their experience with this hackathon: “Our commitment to fight against HIV has always been strong on all fronts, from prevention to therapy, to offer a good quality of life to those living with HIV. Our goal is to find the cure through innovation. That’s why we have launched DEVS FOR HEALTH, a truly innovative project whose wide scope embraces many sensitive areas, from raising awareness among the global population to improving the quality of life of the patients. We have chosen Codemotion as our partner for DEVS FOR HEALTH as we were convinced it was the best technological partner to engage the digital professionals we needed. All 3 winning solutions have met a very high standard, confirming the fact that we have made a good choice. We hope that the 3 projects will continue to develop with Codemotion’s digital support and may soon become ready-to-use solutions having a positive impact on how we manage this infection”. How Codemotion can help your hackathon Codemotion consists of an enthusiastic team of professionals experienced with running events both in-person and virtual. As a result, we can help manage your hackathon to ensure the process and winning teams meet your expectations. Specifically, we can assist with: Establishing a cross-company organising team, including corporate, academic, and community representatives. Assistance with developing the themes and specifics of your hackathon Creating and hosting a central platform to upload all the information relating to the hackathon, including timelines, guidelines and other resources. Community outreach and access to participants through our extensive community of developers, data scientists, designers, technical writers, CXOs and other technologists Assistance with corporate outreach and enlisting mentors Resourcing and supporting mentors throughout the hackathon to support their teams Live event coordination Media outreach Post-participant survey and additional communication such as establishing groups, newsletters or other channels as appropriate Post-event analysis Creation of a contacts database of stakeholder, participant and event attendees for future marketing opportunities If you have any queries about our hackathons, please contact us. About Gilead Gilead Sciences was founded in 1987 in California by the will of a group of researchers to bet on scientific research to develop drugs capable of changing the course of very serious diseases that still afflict humanity. Among these, in particular, HIV / AIDS. Focusing on research since the beginning has allowed the development of over 24 innovative drugs, in addition to the 32 molecules in various stages of clinical development in 5 different therapeutic areas: HIV / AIDS, Hepatitis C, respiratory and inflammatory diseases, hematology and oncology. Gilead is now present in over 35 countries around the world. 
Thanks to the work of over 11,000 employees and collaborators, it continues to be at the forefront of innovation: cell therapy in oncohematology. Commitment to the fight against HIV In Italy since 2000, Gilead has its operational headquarters in Milan and counts on the value and professionalism of over 200 employees. Since its founding, Gilead has never stopped considering HIV infection as one of the global health emergencies on which to focus its scientific and social commitment. Gilead’s scientific research has led to increasingly effective therapies that have transformed HIV from a debilitating and deadly infection to a chronic and manageable disease today. Through close collaboration with physicians, patient associations and public institutions, Gilead has carried out important initiatives for the prevention and early diagnosis of HIV. In addition, it promotes and supports programs to improve the quality of life and therapeutic assistance of those affected. Devs for Health is another piece of the company’s commitment to fighting the virus.
https://medium.com/@catelawrence/devs-for-health-a-hackathon-for-good-codemotion-magazine-65926953e7c4
['Cate Lawrence']
2021-01-11 11:52:07.167000+00:00
['HIV', 'Tech For Good', 'Hackathons', 'AIDS', 'Technology']
651
It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable.
It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable. Nadal vs Tsitsipas Live TV Nov 19, 2020·6 min read Life is a journey of twists and turns, peaks and valleys, mountains to climb and oceans to explore. Good times and bad times. Happy times and sad times. But always, life is a movement forward. No matter where you are on the journey, in some way, you are continuing on — and that’s what makes it so magnificent. One day, you’re questioning what on earth will ever make you feel happy and fulfilled. And the next, you’re perfectly in flow, writing the most important book of your entire career. https://www.deviantart.com/ncflive/commission/Nadal-R-Tsitsipas-S-live-score-video-stream-1410905 https://www.deviantart.com/ncflive/commission/Rafael-Nadal-vs-Stefanos-Tsitsipas-live-stream-1410906 https://www.deviantart.com/ncflive/commission/ATP-Finals-2020-live-Rafael-Nadal-vs-Stefanos-Tsitsipas-live-stream-1410907 https://www.deviantart.com/ncflive/commission/Nadal-vs-Tsitsipas-LIVE-STREAM-REDDIT-1410908 https://www.deviantart.com/ncflive/commission/Stream-official-Nadal-vs-Tsitsipas-LIVE-STREAM-REDDIT-1410909 https://www.deviantart.com/ncflive/commission/LiveStream-Nadal-vs-Tsitsipas-Live-Online-1410910 https://www.deviantart.com/ncflive/commission/Livestream-Rafael-Nadal-vs-Stefanos-Tsitsipas-Live-1410911 https://www.deviantart.com/ncflive/commission/Nadal-vs-Tsitsipas-Live-Stream-Free-1410912 https://www.deviantart.com/ncflive/commission/Nadal-vs-Tsitsipas-live-stream-How-to-watch-ATP-Finals-2020-match-onl-1410913 https://www.deviantart.com/ncflive/commission/LIVE-Tsitsipas-vs-Rublev-Live-Stream-Free-Tennis-Final-2020-1410914 https://www.deviantart.com/ncflive/commission/LiVE-Rafael-Nadal-vs-Stefanos-Tsitsipas-1410915 https://www.deviantart.com/ncflive/commission/Live-Rafael-Nadal-vs-Stefanos-Tsitsipas-LIVE-STREAM-1410916 https://www.deviantart.com/ncflive/commission/Live-Rafael-Nadal-vs-Stefanos-Tsitsipas-LIVE-STREAM-2020-1410917 https://www.deviantart.com/ncflive/commission/Live-Rafael-Nadal-vs-Stefanos-Tsitsipas-LIVe-STREAM-1410918 https://gitlab.com/gitlab-org/gitlab/-/issues/285245 https://gitlab.com/gitlab-org/gitlab/-/issues/285246 https://gitlab.com/gitlab-org/gitlab/-/issues/285248 https://gitlab.com/gitlab-org/gitlab/-/issues/285249 https://gitlab.com/gitlab-org/gitlab/-/issues/285250 What nobody ever tells you, though, when you are a wide-eyed child, are all the little things that come along with “growing up.” 1. Most people are scared of using their imagination. They’ve disconnected with their inner child. They don’t feel they are “creative.” They like things “just the way they are.” 2. Your dream doesn’t really matter to anyone else. Some people might take interest. Some may support you in your quest. But at the end of the day, nobody cares, or will ever care about your dream as much as you. 3. Friends are relative to where you are in your life. Most friends only stay for a period of time — usually in reference to your current interest. But when you move on, or your priorities change, so too do the majority of your friends. 4. Your potential increases with age. As people get older, they tend to think that they can do less and less — when in reality, they should be able to do more and more, because they have had time to soak up more knowledge. Being great at something is a daily habit. You aren’t just “born” that way. 5. 
Spontaneity is the sister of creativity. If all you do is follow the exact same routine every day, you will never leave yourself open to moments of sudden discovery. Do you remember how spontaneous you were as a child? Anything could happen, at any moment! 6. You forget the value of “touch” later on. When was the last time you played in the rain? When was the last time you sat on a sidewalk and looked closely at the cracks, the rocks, the dirt, the one weed growing between the concrete and the grass nearby. Do that again. You will feel so connected to the playfulness of life. 7. Most people don’t do what they love. It’s true. The “masses” are not the ones who live the lives they dreamed of living. And the reason is because they didn’t fight hard enough. They didn’t make it happen for themselves. And the older you get, and the more you look around, the easier it becomes to believe that you’ll end up the same. Don’t fall for the trap. 8. Many stop reading after college. Ask anyone you know the last good book they read, and I’ll bet most of them respond with, “Wow, I haven’t read a book in a long time.” 9. People talk more than they listen. There is nothing more ridiculous to me than hearing two people talk “at” each other, neither one listening, but waiting for the other person to stop talking so they can start up again. 10. Creativity takes practice. It’s funny how much we as a society praise and value creativity, and yet seem to do as much as we can to prohibit and control creative expression unless it is in some way profitable. If you want to keep your creative muscle pumped and active, you have to practice it on your own. 11. “Success” is a relative term. As kids, we’re taught to “reach for success.” What does that really mean? Success to one person could mean the opposite for someone else. Define your own Success. 12. You can’t change your parents. A sad and difficult truth to face as you get older: You can’t change your parents. They are who they are. Whether they approve of what you do or not, at some point, no longer matters. Love them for bringing you into this world, and leave the rest at the door. 13. The only person you have to face in the morning is yourself. When you’re younger, it feels like you have to please the entire world. You don’t. Do what makes you happy, and create the life you want to live for yourself. You’ll see someone you truly love staring back at you every morning if you can do that. 14. Nothing feels as good as something you do from the heart. No amount of money or achievement or external validation will ever take the place of what you do out of pure love. Follow your heart, and the rest will follow. 15. Your potential is directly correlated to how well you know yourself. Those who know themselves and maximize their strengths are the ones who go where they want to go. Those who don’t know themselves, and avoid the hard work of looking inward, live life by default. They lack the ability to create for themselves their own future. 16. Everyone who doubts you will always come back around. That kid who used to bully you will come asking for a job. The girl who didn’t want to date you will call you back once she sees where you’re headed. It always happens that way. Just focus on you, stay true to what you believe in, and all the doubters will eventually come asking for help. 17. You are a reflection of the 5 people you spend the most time with. Nobody creates themselves, by themselves. 
We are all mirror images, sculpted through the reflections we see in other people. This isn’t a game you play by yourself. Work to be surrounded by those you wish to be like, and in time, you too will carry the very things you admire in them. 18. Beliefs are relative to what you pursue. Wherever you are in life, and based on who is around you, and based on your current aspirations, those are the things that shape your beliefs. Nobody explains, though, that “beliefs” then are not “fixed.” There is no “right and wrong.” It is all relative. Find what works for you. 19. Anything can be a vice. Be wary. Again, there is no “right” and “wrong” as you get older. A coping mechanism to one could be a way to relax on a Sunday to another. Just remain aware of your habits and how you spend your time, and what habits start to increase in frequency — and then question where they are coming from in you and why you feel compelled to repeat them. Never mistakes, always lessons. As I said, know yourself. 20. Your purpose is to be YOU. What is the meaning of life? To be you, all of you, always, in everything you do — whatever that means to you. You are your own creator. You are your own evolving masterpiece. Growing up is the realization that you are both the sculpture and the sculptor, the painter and the portrait. Paint yourself however you wish.
https://medium.com/@tsitsipasliveon/life-is-a-journey-of-twists-and-turns-peaks-and-valleys-mountains-to-climb-and-oceans-to-explore-2ad03d4329c
['Nadal Vs Tsitsipas Live Tv']
2020-11-19 20:07:27.082000+00:00
['Tennis', 'Technology', 'Sports', 'Social Media', 'Live Streaming']
652
San Diego Companies Raced to Confront COVID-19. Their Actions Reflect the Simple Truth About Leadership
San Diego companies helped the global community manage the COVID-19 crisis. Medical technology companies ramped up production, while leaders in biotechnology, communications, and computer technology opened supply chains and fast-tracked solutions for scientists and patients. Their quick action and definitive stance in the crisis proved one simple truth about successful organizations today: purpose-driven leadership matters. Information technology professionals aren’t surprised to hear that 90% of the nation’s innovation sector employment growth in the last 15 years was generated in five major coastal cities, including San Diego, CA. The weather isn’t the only factor that contributes to the region’s attractiveness for employers and job seekers. Many companies in San Diego provide their workers with an intangible, highly-desirable benefit: purpose-driven leadership. San Diego is rich with career-defining opportunities, from early startups to Fortune 100 companies. But there are a few industries that stand out among IT professionals, most notably, medical technology, biotechnology, semiconductors, and communications. Are these San Diego companies well-liked by employees for their focus on innovative solutions? Yes, but that’s only half of the story. More importantly, they provide a clear sense of purpose for their workforce, enabling them to support a meaningful mission. Many San Diego companies deserve recognition for the problems they solve, especially when the world isn’t watching. Nevertheless, the COVID-19 crisis intensified the need for companies to do more, make more, and facilitate change beyond the usual scope of work. At a time when shareholder relations and bottom-line revenues fall under scrutiny, what’s expected of people in highly visible leadership positions? The answer is always, lead with purpose. How San Diego Companies Responded to COVID-19 While each company addressed urgent needs differently, trends emerged that pointed to well-defined leadership and a clear mission across each organization. The following crisis responses were timely, thoughtful, and impactful across the world. Here’s a summary of how each company responded to COVID-19. Scripps Health In response to COVID-19, experts voiced the urgent need for testing. And in most regions across the country, it didn’t arrive fast enough. In San Diego, however, Scripps Health addressed the problem of slow testing with its molecular point-of-care test for detecting the novel coronavirus. The test used to detect a positive result in five minutes or a negative result in thirteen was one of the fastest on the market. The innovative diagnostics showcased Scripp’s commitment to providing superior patient care. President and CEO Chris Van Gorder had this to say about the company’s speed to market, “Testing is a critical part of the overall response to the coronavirus pandemic. Scripps moved that important tool to the front line of our fight against this devastating disease. The ability to deliver results in minutes at our hospitals for patients exhibiting possible symptoms of COVID-19 allows our physicians to make faster and better decisions about delivering the best care needed.” ResMed Whether you watched the news or not, you heard about the global need for ventilators to keep virus patients alive. Messages filled screens from TV to social media. Cue ResMed, a global leader in respiratory medicine. 
Although the company played a pioneering role in the fight against COVID-19, it was their sustained effort that was most impressive, an all-in stance that focused on people and purpose. The statements made by CEO Michael Farrell summarized the company’s unique position in the crisis — and display purpose-driven leadership at its finest. Imagine the pride employees felt after reading the following press release. “As a global leader in respiratory medicine, ResMed stands with the world in the face of the latest coronavirus disease COVID-19 and is ready to help mitigate its effects, helping people breathe while their immune system fights this virus. More than 7,500 ResMedians are working in over 140 countries for this purpose.” “ResMed is taking every measure possible worldwide to maximize the production of ventilators, masks, and other respiratory devices. We are looking to double or triple the output of ventilators, and scale-up ventilation mask production more than tenfold.” “I’d like to call out first-responder ResMedians in China’s Hubei province, the epicenter of the coronavirus outbreak, in particular one ResMed hero who, since early January, has donned a positive pressure hazmat suit, and helped set up thousands of people on ResMed ventilators and ResMed masks. There are also 100-plus ResMedians from Malaysia who in mid-March volunteered to keep working in our Singapore manufacturing plant when Malaysia closed its borders, relocating to live near our plant in Singapore, spending weeks away from their families, so they can continue to produce as many lifesaving ventilators and ventilation masks as possible.” You can also watch Michael Farrell’s interview with Jim Cramer from Mad Money. In retrospect, How would you like to associate your work with a company that formed a COVID-19 task force? Talk about identifying a sense of meaning from one’s daily work. ASML Some people say that corporations must give back to the community in a time of need. Many companies in San Diego maintain robust charitable programs, showing their commitment to corporate social responsibility (CSR). But what should be expected of an international organization during a global public health crisis? And what does purpose-driven leadership look like in this scenario? ASML, a leader in the semiconductor industry, with 24,900 people in 16 countries, demonstrated the highest level of global citizenship during the COVID-19 pandemic. Company leaders engaged every internal and external channel and provided support where it was needed, from supply chain coordination to programming and boots-on-the-ground volunteer work. ASML assisted national and regional Dutch crisis teams with the purchase of personal protective equipment (PPE), using its global procurement network. Additionally, the company offered engineering expertise to support companies and organizations to increase the manufacturing capabilities of medical equipment that was in short supply. Regional leaders took a more granular approach to impact local communities, where they deployed their workforce to feed frontline healthcare workers, distribute computers for students, and share IT training tools for kids learning from home. All levels of the company played a crucial role, including the design and implementation of thoughtful support programs. What does a detailed support initiative look like in this scenario? 
The team in San Diego, for example, joined forces with Frontline Foods to feed healthcare workers with meals from local restaurants impacted by the crisis. Viasat How a mission-driven company reacts to a crisis reflects the authenticity of their mission. And what the company does to fulfill its mission is crucial. For San Diego, CA, based global communications provider, Viasat, COVID-19 challenged the company to remain connected to their customers and partners. The pandemic effectively shut down cities, including business and residential areas, all of which impacted Viasat’s customers. “Business as usual” wasn’t a viable mindset under the circumstances. So, at a time when people lost their jobs and businesses either downsized or closed, Viasat offered immediate assistance. With their purpose in focus (to keep homes, airlines, businesses, and governments connected — no matter how difficult), the company pledged to keep Americans connected with the following aid: Not terminate service to any residential or small business customers because of their inability to pay their bills due to the disruptions caused by the coronavirus pandemic; Waive any late fees that any residential or small business customers incur because of their economic circumstances related to the coronavirus pandemic; and Open its Wi-Fi hotspots to any American who needs them. Purpose-driven leadership doesn’t hesitate to put the customer first. Viasat’s actions likely reduced the heavy burden many businesses faced. Employees were positioned to offer help on a human-to-human level. And in the long-run, as their mission states, the company kept people and communities connected at a critical time. Illumina COVID-19 impacted the world, some regions more seriously than others. Then we witnessed the actions business leaders took to help communities, from local authorities to political figures. What we couldn’t see, however, were the scientists working tirelessly to understand this novel Coronavirus. And that’s what they did in record time. Interestingly, researchers’ ability to sequence COVID-19 in Wuhan, China, was attributed to solutions created by Illumina, a leading sequencing and array technologies company in San Diego, CA. Few people outside of the company knew that Illumina’s innovative technology identified the virus. And fewer people knew how quickly their leadership team responded to the discovery. Within days, Illumina’s technology was available globally, an effort to support laboratories worldwide. Their mindset, supported by the broader scientific community, emphasized tracking the virus, its evolution, and epidemiology. It’s not every day that a biotechnology company disseminates a workflow for virus detection and sequencing with the world (with charitable intentions), so why now? Purpose-driven leadership. Sentiments shared by Illumina staff members, ‘the outbreak is an important reminder that the global community must strengthen national and international programs for early detection and response.’ Illumina’s mission to combat COVID-19 was transparent and multi-faceted. In addition to sharing innovation with the global scientific community, the company published a guide on Coronavirus detection. Imagine how much time was saved in the fight to slow the pandemic, and how rewarding it felt for the brilliant minds that developed the solutions. Why Purpose-Driven Leadership Matters During a global crisis, purpose-driven leadership influences and inspires people to be agile in a fast-changing world. 
Additionally, bold people are needed to address urgent needs and create long-term solutions. And often, those people are empowered by companies that have the knowledge and tools needed to create change. Scripps Health, ResMed, ASML, Viasat, and Illumina are five examples of San Diego companies that empowered their employees to be part of the crisis response. However, their response to the crisis reflects years of experience and cultural development across the enterprise. One thing is for sure: these companies integrated their people (workforce) and purpose (why they go to work every day) long before COVID-19 arrived. If you look closely, you’ll find that their mission statements embody everything we know about purpose-driven leadership; without them, we wouldn’t be so optimistic about tomorrow. Purpose Statement Examples Scripps Health Scripps strives to provide superior health services in a caring environment and to make a positive, measurable difference in the health of individuals in the communities we serve. ResMed Our team of 7,500 ResMedians is making a positive impact on millions of lives every day in more than 140 countries. We’re passionate about being accountable to the communities we’re in. Our work is guided by an environmental policy that promotes waste and pollution reduction, the use of reduced-impact materials and fair labor practices across the globe. ASML Unlocking the potential of people and society by pushing technology to new limits. Viasat We’ve made it our business to solve high-value, hard problems that create a real impact in the world and deliver the experiences that people want, need, and expect. Conclusion Increasing production three-fold, opening a new supply chain, donating PPE and computers, and sequencing a novel coronavirus across borders wasn’t the result of pure luck. Instead, dedicated employees and focused leaders accomplished these momentous achievements. Each one of the individuals involved worked with a clear goal: the desire to do good and help others. They knew what they were doing and why it was important. And that’s the value of purpose-driven leadership. The positive impact of their combined efforts is why it matters.
https://medium.com/live-your-life-on-purpose/san-diego-companies-raced-to-confront-covid-19-b71d1957e952
['The Carrera Agency Designing North']
2020-04-25 10:01:00.958000+00:00
['Technology News', 'Leadership', 'Covid 19', 'Career Advice', 'Workplace']
653
Surviving in a World of Exponential Change - Part 1
Human adaptability Humans are arguably the most adaptive species on the planet. We can see it by examining the vast regions we have been able to live in as well as the size of our population. We were able to adapt to massive climate shifts, fight off larger predators, and live in everything from the cold, vast arctic to blazingly hot and arid deserts. Much of this adaptability comes from our ability to develop tools and technology to master our environment. Some of the earliest recorded evidence of our use of tools date back to 3.4 million years ago with the discovery of cut-marked bones in Ethiopia. From the bones, it appears that sharp tools were used to cut the flesh from the bone and access the marrow inside. Our ability to harness fire dates back to approximately one million years ago. This was the first time that we leaped beyond other species within the hominids. This ability set us on a trajectory to have tools hardened with fire, warmth from the cold, and protection from predators, among others. Fast forward almost a million years and we just began to form large cities and civilizations. Sumer is the earliest known complex civilization, which formed approximately 5,500 years ago. From what we know, Sumerians understood geometry, astronomy, construction, and they developed agriculture and irrigation on a large scale. They also created societal systems such as legal, administrative, and trade. These skills were needed for the growing population of Sumer, which would be around one million. Think about the scale of time in which humanity went from first using a tool to developing the first civilizations. The range was from approximately 3.4 million to 5,500 years ago. Many generations had come and gone, passing down the knowledge and letting it slowly evolve. Technological advancement was something that took time, sometimes tens or hundreds or even thousands of generations. The Growth of Civilizations Once we began developing complex civilizations, we were able to support larger populations. If we were to trace the evolution of society and the rate of advancement, we will see that the curve of population growth started an exponential rise starting just after urban civilizations were formed around the Bronze Age. Record-keeping, monetary systems, hierarchical structures, metalworking, and more enabled our species to go from nomadic tribes that required expeditions across vast expanses to survive, to one where we could settle, long-term, in river valleys, coastal regions, and more. We were able to control our environment to grow populations and manage our survival on a grand scale. As we crossed into the Bronze Age, around 3,500 years ago, these localized civilizations expanded into empires. One of the first being the Akkadian Empire. This saw the merger of Sumerian and Akkadian cultures. This is also the period in which the Egyptian empire rose to prominence. New religions were formed and with it, the ability to manage larger societies was enabled. Even still, these changes and adaptations from a civilizational level happened over the courses of hundreds or thousands of years. Jumping ahead to the “modern” era, we reach the period in which steep increases in population, economic growth, and technological change were all starting to occur. As early as the 1500s, the scientific revolution began to rapidly spread. This led to the enlightenment period and set humanity on an exponential course that turned a world population from that of 500 million in the 1500s, to almost 8 billion in 2019. 
One of the largest contributing factors was the use of science and technology to further master our world. The reason the history of humanity and civilization needed to be covered in this analysis of the exponential future is to establish the time frames over which humans have traditionally needed to adapt to change. Exponential Explosion The exponential explosion is a period in which technology, as well as the world, changes much faster than anyone can track and manage. We are reaching a point of verticality, where an exponential curve becomes almost vertical. This is called a J-curve or “hockey stick” curve. When this happens, as is arguably happening now, it is almost impossible for us to recognize the significance of the changes because we have no frame of reference; we are not used to the speed because the curve in the past wasn’t as steep. “The greatest shortcoming of the human race is our inability to understand the exponential function.” — Albert Bartlett (physics professor at the University of Colorado) As the curve steepens much faster than before, we will be less able to adapt to it. Think back to our earlier tour through the history of change. For a majority of the past 3.4 million years, our rate of change has been largely flat, taking tens, hundreds, or even thousands of generations to make significant jumps (in comparison to now). Our biological evolution takes thousands of years to have a meaningful impact. We can adapt to our environment, but are we built for the coming rate of change? Have our brains changed that much since society was much simpler? These are things I will explore in a later article, but for now, let’s focus on the phenomenon itself. Technology’s Example Growing up around technology and working in the fields of software engineering, design, and artificial intelligence has equipped me, in some way, for the exponential explosion. A software engineer has to constantly learn new frameworks and programming languages to keep up with new capabilities. Designers have to quickly adapt to new human-computer interactions and the increased power available to products. Those working with artificial intelligence recognize and appreciate the inevitability of self-improving models. There are so many aspects of technology that we could focus on, but I will try to keep the narrative concise with a few examples. Another aspect of this increase in the rate of change is the next generation of internet and cellular infrastructure. If you look at the last decade and what cloud infrastructure and virtualization have provided, they have enabled a new era of technological improvement and innovation. Before AWS, Google Cloud Platform, Azure, and others, businesses had to install and manage their own infrastructure. Startups needed much more capital to build out their ideas. Now, for next to nothing, one can stand up a set of servers in the cloud and utilize world-class infrastructure. It has enabled a steeper growth curve. When you look at the next generation of cellular infrastructure by the likes of Ericsson, Qualcomm, Nokia, and Huawei, you notice that it is being designed to be modular, virtualized, and upgradable. This will enable new cellular technology to iterate much more quickly. When new advancements in this infrastructure appear, it will be much simpler and cheaper to upgrade, enabling new economies to form and industries to grow. One of those industries is the IoT (Internet of Things) future, which will be booming by 2025 with over 25 billion connected devices. 
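To make Bartlett's point about the exponential function concrete, here is a minimal, illustrative Python sketch (my addition, not from the original article). It uses two figures mentioned earlier in the article, roughly 500 million people in the 1500s and almost 8 billion in 2019, to back out the implied average growth rate and doubling time, and to show why a compounding curve looks flat for centuries before turning nearly vertical:

```python
import math

# Anchor figures taken from the article: ~500 million people in the 1500s,
# almost 8 billion by 2019. Treat the growth rate as constant for illustration.
P0, P1 = 500e6, 8e9
years = 2019 - 1500

# Constant annual rate r such that P1 = P0 * (1 + r) ** years
r = (P1 / P0) ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + r)

print(f"Implied average growth rate: {r:.2%} per year")
print(f"Implied doubling time: about {doubling_time:.0f} years")

# Sample the curve once per century: it crawls for ages, then shoots up,
# which is the J-curve / "hockey stick" shape described above.
for year in range(1500, 2020, 100):
    population = P0 * (1 + r) ** (year - 1500)
    print(year, f"{population / 1e9:.2f} billion")
```

Even a compounded rate of roughly half a percent per year is enough to produce the hockey-stick shape; the point is that our intuition reads the early, flat part of the curve and misses how quickly the vertical part arrives.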
Let’s think back twenty years, the world was just learning about the internet and all of its economic potential. We were a bit too naive and the dot-com bubble exploded in the late 90s, but it was a temporary blip on the exponential growth curve. The internet changed the world in many ways, connecting half of the world’s population by 2017 and creating entire new sectors of the economy. The smartphone did something similar, creating new markets, app stores, and connectivity. Over 5 billion people have mobile devices, with 60% of those devices being smartphones. In the late 90s, it was hard to see the path ahead. What would the internet become, how would it change the world? We are at a similar precipice now. The only difference is that we are on a steeper part of the curve. I’ve raised several questions throughout this introduction that I will try to answer in the next parts of this series. For now, I just leave you with this — do you think you are ready for super-exponential change?
https://medium.com/beyond-the-river/surviving-in-a-world-of-exponential-change-part-1-7be1c4304c6f
['Drunk Plato']
2019-10-06 17:23:18.496000+00:00
['Future', 'Articles', 'Humanity', 'Technology', 'Change']
654
H+ Weekly — Issue #320. OpenAI disbands its robotics team…
Artificial Intelligence Towards the end of deep learning and the beginning of AGI Javier Ideami explains in this article how the volume of neurons, cortical columns in our neocortex and movement help us better understand how our brains work and therefore help us build more flexible and resilient artificial neural networks. Voice clone of Anthony Bourdain prompts synthetic media ethics questions The creators of “Roadrunner,” a documentary about the deceased celebrity chef Anthony Bourdain, trained an AI to generate a copy of Bourdain’s voice and use it to read one of his emails. This has sparked some ethical questions about synthetic media which this interview sheds light on. Dynascore’s AI Music Engine Writes Tracks to Match Your Videos In need of some cool music for your latest YouTube video? There is an AI now that will do just that. AI-Generated Language Is Beginning to Pollute Scientific Literature Researchers from France and Russia have published a study indicating that the use of AI-driven probabilistic text generators such as GPT-3 are introducing ‘tortured language’ into scientific journals. Some of the flagged papers seem to use GPT-3-like algorithms to bolster the English skills of the papers’ authors but some seem to be completely generated by AI. Robotics OpenAI disbands its robotics research team Robotics is hard and OpenAI is another company to learn that lesson. OpenAI dabbled into robotics with a robotic arm solving Rubik’s Cube but did not decide to pursue the research further. Instead, the company decided to focus on other, more profitable, projects, like GPT-3 and its applications. Mini Pupper: Boston Dynamics Spot in the Palm Mini Pupper is a project for all who want to play with Boston Dynamics Spot but don’t have spare $75k to get one. Designed at Stamford, Mini Putter aims to be a “robot dog for education at an acceptable price”. And it is open source so if you want you can go on and build one yourself. Getting dressed with help from robots Researchers from MIT trained a robotic arm to safely put a jacket on a human. The robot had to plan its motion with a human to make sure no one gets hurt. Researchers hope their work could potentially prove to be a powerful tool in expanding assistance for those with disabilities or limited mobility. One might think that teaching robots how to dance is a silly idea not worth pursuing. This article, however, argues that teaching robots how to dance and the emerging field of choreorobotics can help robots interact safely with and around humans. Biotechnology DeepMind puts the entire human proteome online, as folded by AlphaFold The AlphaFold Protein Structure Database is a collaboration between DeepMind, the European Bioinformatics Institute and others, and consists of hundreds of thousands of protein sequences with their structures predicted by AlphaFold — and the plan is to add millions more to create a “protein almanac of the world.” How Designer DNA Is Changing Medicine Gene therapies based on the latest gene-editing techniques are revolutionising medicine. This article mainly focuses on applying those techniques to sickle cell disease but the same methods can be used to cure other genetic diseases.
https://medium.com/h-weekly/h-weekly-issue-320-2e067fa5ede1
['Conrad Gray']
2021-07-25 10:31:05.177000+00:00
['Artificial Intelligence', 'Technews', 'Robotics', 'Technology News', 'Biotech']
655
What the Apple Newton taught us about UX 27 years ago
Application View The proto view was the building block of the Newton’s base application. The Newton used a series of screens to display an application’s UI, just like we see today on mobile phones and tablets. The default view would have a title at the top and a nav bar at the bottom. In this example, there is a small pop-up clock button on the left and a close button on the right. Since the Newton had a dedicated navigation area at the bottom of the screen, most navigation UI was towards the bottom of the application’s view. Headers You can add a header to the application. This usually consists of an icon on the left, some text, and a horizontal line that goes across the rest of the screen. This helped display the application’s title (or a custom one) and could even include additional icon buttons for contextual actions. For all of the forward-thinking that went into the Newton’s UI, there were no rules around how an application could look or feel. So most developers implemented custom-looking layouts, which meant that things we now take for granted, like a consistent location for the time, date, and notifications, did not exist. Folder Tab Applications could also make use of a folder tab that went along the top of the view. While this is commonplace now thanks to web browsers, back in the early ’90s, this was a new concept. The Newton actually implemented this as a drop-down menu instead of the way we know tabs today. To make use of the space, developers could also include the clock, which wasn’t a fixed UI element along the top of the screen like we have today. In addition to the folder tab, developers had access to sub-tabs for jumping between data views in an application. Here is an A-Z tab navigation example typical on the Newton. Since the Newton was limited in how many pixels it could display on the screen with its 320 x 240 pixel resolution, designers had to cram in as much as they could. Here you can see that the tabs automatically increment two letters at a time since it wouldn’t be possible to display 26 individual tabs at once. Some letters actually touch the border of the tab because there simply wasn’t enough space for consistent padding. This wouldn’t be acceptable in today’s UI design. Most likely, some tabs would be hidden off-screen to allow scrolling left and right, so each letter could have the same spacing and padding. Menu Bar As we saw earlier with the default application view window, the bottom of the screen was reserved for an application menu bar. There was always an information button on the left to display help pop-ups for users. Next were usually the font and keyboard input options. Then it was up to the developer what they wanted to display. Since there was no drop-down menu like you would have on a desktop computer, most actions were buttons in this menu bar. When there wasn’t enough room to display all of the actions in the static menu bar, developers used a floating bar with additional options. Finally, all applications needed a way of closing, so the exit button was always the last option, along the far right of the menu bar. A lot of thought had to go into minimizing the actions of an application depending on what kind of buttons you used in the menu bar and how much space was available. Buttons Buttons are the building block of any UI, so it’s not surprising that the Newton supported several types of buttons, from simple text to borderless icons. One thing that was rare on the Newton was compound buttons that combined an icon and text. 
This was probably due to the limited space on the screen, especially when it came to action buttons in menus. Input Fields I could write an entire article on the Newton’s text input components. The most basic one was just a simple input field with a label on the left and a dotted line indicating it could be edited. Because of the Newton’s unique handwriting recognition, there were some additional options that became available when accessing text input fields, allowing you to toggle between script and the virtual keyboard. Radio Buttons The Newton had radio buttons developers could also include in their user choice options. These radio buttons were minimalistic, using a dotted outline for an unselected option and a filled circle for the current selection. Today, the dotted line circles would read as disabled or not selectable, and I wonder why they didn’t go with standard circle outlines. Perhaps with all of the other thin outlines, it would have been too difficult to stand out on a crowded screen. Checkboxes followed a similar design, with dotted rectangle outlines for unselected options and a small checkmark on top for selected ones. Pickers Pickers are usually a combination of a button and some kind of scrolling list. The Newton had support for pickers as well, and these often took on the appearance of what we would consider contextual pop-up menus that appear when right-clicking on today’s computers. These pickers were always triggered by a button and could contain icons, text, checkboxes, and even horizontal dividers. Based on the location on the screen, the picker would appear below, next to, or above the button. The Mac was well known for its use of icons throughout the OS, so the Newton leaned heavily on clear iconography wherever possible. I like this example of a person, company, and group used in conjunction with the text to add more personality to the picker. Finally, additional UI elements could be combined with pickers to create more complex interactions. In this example, there is a title, a list of text, sorting tabs on the right, scrollers below, and a close button. Today this would be too much UI for a single picker, especially since the close button is redundant. Now we expect to close a pop-up picker by clicking outside of it or making a selection. Modals Modal and pop-up views were a big part of the Newton’s UI. These could be simple display screens telling you about the current application or showing some kind of help message. Modals could also prompt the user for action or modify settings. While a modal could contain most of the standard UI components, a typical application was only able to display one at a time. Finally, the border, title, and close options were consistent across the OS. Scrollers It wasn’t practical to expect the developer to be able to display all of the content a user needed on a single screen at a time. So the Newton had a scroller that could be tied to lists for paging up and down. Scrollers typically sat to the right of the UI component they controlled. Unlike desktop operating systems, these scrollers didn’t extend to the height of the viewport. While this may have been efficient in small lists, using these scrollers for lists taking up the entire screen was not ideal. There was no “smooth scrolling” on the Newton. Lists refreshed slowly, line by line, when you scrolled up or down, making this a chore in most applications. Sliders and Progress Bars Visually, sliders and progress bars were identical, the only difference being the handle on the slider. 
One downside of the slider, and most of the Newton's UI, was the size of interactive elements. In the case of the slider's handle, it was difficult to click on even with the stylus. Components requiring dragging tend to be a bit frustrating to use, especially with the Newton's slow redraw rate. For progress bars, used in the battery gauge example, the slider's handle was removed and the value was managed programmatically. Dividers, Titles, and Status Bars While I touched a bit on some of the standard title bars an application could display across the top of the screen, there were some additional UI components a developer had access to for breaking up the visual flow of a screen. Even back then, Apple knew the importance of grouping UI together in order to create a readable visual hierarchy. All of these elements shared a similar design aesthetic, starting with the divider. The Newton UI used a lot of heavy horizontal lines to separate elements on the screen. The divider was unique in that it allowed text to sit in the middle of it, unlike the traditional horizontal rule we use in web design. The same design aesthetic applied to the status bar, which we saw in the beginning. Unlike the divider, which breaks around the text, the status bar's icons sit on top of the line. This was a strange design choice and made these status bars feel a bit cluttered when a lot of options or icon buttons were in them. Finally, there was an individual title component that supported a small icon, text, and a line below the two. I wasn't able to tell if you had any control over changing the thickness of these divider lines. It appears that each component had a set line thickness, and it was up to the developer to place them according to what worked best for the visual weight on the screen. Fonts The Newton had an impressive collection of built-in fonts. Users were even able to adjust the size of the fonts as well. In total, the Newton had around 12 or so fonts that were common on the Mac OS of the time. Also, developers had access to similar font display options you'd expect today. While some of these fonts were better than others, text elements in the core UI components all shared a similar font and set size based on the default design of the component. As far as I know, these could not be changed, and there was no way to increase the system font size for people without 20/20 eyesight. Overall, the UI fonts on the Newton were tiny to display as much information as possible on a single screen. Everything Else There were a lot of additional UI components I didn't go into detail on above but wanted to call out quickly. There were different types of expandable and collapsible rolled lists, as they called them: Outside of accordion lists, there were scrollable images that were masked off inside of a viewport: For tabular data and help with laying out content in a view, developers had access to tables: And finally, there were several ways to visualize the date and time, from simple calendars: Along with analog and digital clocks: You can start to see the early shift toward skeuomorphic design in their UI. The digital clock example is reminiscent of an old alarm clock radio that used number cards attached to a wheel to display the time. You can see this reference in the dotted line across the middle of the digits.
https://uxdesign.cc/what-the-apple-newton-taught-us-about-ux-27-years-ago-427c6be66a59
['Jesse Freeman']
2020-07-11 11:16:17.815000+00:00
['User Experience', 'Design', 'Education', 'Technology', 'Startup']
656
AngularJs Vs. ReactJs: Which Is The Best Choice in 2021?
Whether to choose React JS or Angular JS may leave you in a serious quandary. From performance and features to platform compatibility, there are various parameters that you need to compare to make a valuable decision. Each framework comes with its own benefits and limitations. In order to help, here's a complete comparison of these two top JavaScript frameworks, AngularJs and ReactJs, to help you get started with your next web app development project using the best JS framework. Key Highlights of the Content Key Statistics Indicating Why Businesses Should Invest In Web Apps in 2021 Introduction: Why To Choose JavaScript Frameworks? Detailed Comparison Between AngularJs Vs ReactJs AngularJs Vs ReactJs: Who is the Winner? Conclusion Let's begin! Key Statistics Indicating Why Businesses Should Invest In Web Apps in 2021 In the past, having a business website was enough to reach the broader market. Today, you must be witnessing a massive increase in the variety and types of business apps being downloaded, with more consumers turning to mobile apps to interact with brands. Gone are those days when having a business site was enough to run your brand. These are not empty words. You can take a look at the statistics that help you understand why it has become essential to develop web applications: According to the survey, 90% of mobile time is spent on mobile applications and 10% on the rest of the internet, including websites. A mobile app user spends 201.8 minutes per month on shopping apps, whereas a website user will spend 10.9 minutes per month. Users view 4.2x more products in a mobile app than on the website. Therefore, 3x higher conversion rates are expected from mobile apps and 1.5x via desktop. Mobile apps were expected to make $700 billion in annual sales in 2020. To sum up these facts and stats, it is expected that website usage will shrink further as more and more e-commerce businesses move to web apps to reach a wider market. But before you get straight into the process of hiring a mobile app development company to create a web application, make sure you choose the best technologies and frameworks. And JavaScript and its frameworks are the titans in the field of website or web app development. Introduction: Why Should you Consider JavaScript Frameworks for Web App Development? Gone are those days when web app performance was the most significant issue and users had to download mobile apps for a better experience. All thanks to JavaScript frameworks that have improved performance and lifted the demand for web apps in the market. For a long time, many technologies and programming languages have come and gone, but nothing could replace JavaScript. According to Statista, JavaScript still holds the top position as a programming language and is the choice of 67% of developers. On one side, developers can't wait for exciting new language features and libraries to transform their development experience. But at the same time, they crave stability and simplicity, so they don't have to drain half of their time browsing issue trackers and Stack Overflow threads about the technologies. Instead of choosing a shiny new language, 67% of developers prefer to go for old reliable JavaScript for website or web app development solutions. Being one of the most lightweight and commonly used scripting languages, JavaScript is still primarily used to create dynamic web content.
Since JavaScript frameworks are written in JavaScript, programmers can manipulate their functions and use them for their convenience. Basically, frameworks are more commonly used for the design of websites, but JavaScript frameworks are a type of tool that makes working with JavaScript easier and smoother. Also, the primary reason behind its popularity is its versatility: it can be used for both frontend and backend development, as well as for testing web applications. But with so many choices of JavaScript frameworks for backend, frontend or even testing, it's challenging to make the right choice of framework for your requirements. When choosing the best JS frameworks, React.js and Angular.js are the primary choices of developers, creating a tough battle between them. Now the real game begins for the developers. Being the most competitive JS frameworks, choosing between these two is tough. So keep reading to know the winner of this battle. Detailed Comparison Between AngularJs Vs ReactJs Since the success of the app is majorly dependent upon the framework that you choose for app development, we carried out a detailed comparison between Angular.js and React.js. Let's start with a quick introduction and take a sneak peek at these frameworks. 1. React.Js Vs. Angular.Js: Know the Baseline of Js Frameworks Conclusion: Being an open-source JavaScript library, React.js was a prime choice of developers in 2019, and 71.7% of users prefer using it again. Whereas Angular.js is an MVC framework with a complete feature set, and 21.9% of users prefer to keep using it in the future. Angular.Js was launched by Google in 2009 as an open-source framework that allows developers to craft unique solutions, especially for the development of single-page applications. Angular.js development works on features like routing, data binding, dependency injection, directives, deep linking and more, and ensures the finest app security; therefore, 6500+ companies are using Angular.Js for their app development. To name a few, popular ones are Google, Amazon, Snapchat, Hennge, Udemy, Lyft and more. React.Js, rather than being considered a framework, is better known as an open-source JavaScript library that was developed by Facebook in 2013. The simple and specific purpose of launching this JS framework was to resolve issues in rendering large datasets and to ensure excellent app performance. React.js corresponds to the "View" in the MVC architecture and is majorly used to build the dynamic user interface of web pages with high incoming traffic. Despite being a newer framework, Facebook, Netflix, Instagram and Discovery have used this JS framework for app development. 2. Angular.Js Vs React.Js: Popularity Conclusion: According to the NPM reports, React.js has hit 10,245,189 weekly downloads, whereas Angular.js has surpassed 638,437 weekly downloads. Despite being young, React.js has achieved 163K stars and 32.7k forks on GitHub, clearly surpassing Angular.js, which sits at around 59.5k stars and 1,578 contributors. In a very short span of time, React.js has emerged as the most loved and in-demand JS framework. With its increasing popularity among mobile app developers, it is safe enough to say that it will dominate in the future. However, it doesn't mean that Angular.js is not in use. Angular.js provides great libraries along with robust template-building solutions that expedite the development process.
For these reasons, Angular.js still holds its position among the JavaScript frameworks and is still being used for development. 3. AngularJs Vs ReactJs: Learning Curve Conclusion: React.Js was introduced with the aim of simplifying the complex app development process and providing a wide choice of libraries, so developers don't need to spend tons of hours re-learning the programming language. In comparison, Angular.js has a steeper learning curve. Being based on the JavaScript programming language, it is easier for React.js app developers to get straight into the development process. All thanks to its simple design, use of JSX, highly detailed documentation and library that make the entire app development process more accessible to both novice and expert developers. But React requires constant learning due to frequent updates. In contrast, the learning curve of Angular.js is considered to be much steeper than React.js. Since Angular.js is a complex and verbose framework that may offer you multiple options to solve a single problem, it has intricate component management that requires repetitive actions. 4. AngularJs Vs ReactJs: App Size and Performance Conclusion: To select the best JS framework you need to consider three metrics: performance, size and the lines of code involved in the app development process. And due to the Virtual DOM, ReactJs apps perform faster than Angular.js apps. Before you hire a mobile app developer, you need to select the framework based on: How long does your app take to show the content and become usable? How big will your app be? How many lines of code did the developer need to create? App size and performance are two important aspects that directly impact the quality of your app and its load and response times. The choice of framework for web app development will directly impact customer satisfaction. All thanks to the Virtual DOM feature of React.Js, which is efficient enough to handle frequent UI updates of the app and ensures fast processing. React.js also eliminates performance-related issues during UI rendering. On the other hand, Angular.js is known for its low performance when you deal with complex and dynamic applications. Angular.js is the best choice of framework for heavy enterprise applications, whereas React.js is preferred for lightweight applications. Since the framework sizes of Angular.js and React.js are around 500KB and 100KB respectively, React.js is the preferred option for app development companies. 5. AngularJs Vs ReactJs: Community Support Conclusion: For any app developer to work efficiently on a project, it is essential to have vast community support for the framework to help you make the best of it. As Angular.js is maintained by Google and React.js came out of the Facebook community, both receive great support from their communities. ReactJs was introduced and is maintained by Facebook; therefore, React.js is undeniably backed by a huge active community. If you choose to hire a software developer, then engineers can cope with the regular updates more efficiently without breaking the application. Popular websites like Netflix, Airbnb, PayPal, Uber and Asana have used ReactJS in their applications. On the other hand, Angular.js is actively supported by Google, which keeps its ecosystem growing and expanding to the next level. AngularJs Vs ReactJs: Who is the Winner?
While comparing the features and functionalities of these two frameworks, it is worth saying that they are the titans in the field of web app development. Declaring one the winner over the other may sound unfair to developers, as Angular.js is an MVC framework with rich features and React.js is a complete open-source JavaScript library. So to choose your own winner, let's take a quick overview with this infographic comparison: EndNote AngularJs and ReactJs are both high-performing JavaScript frameworks that provide extensive support for web app development. With this comparison, our simple aim is to help you select the most suitable framework for your web app development project. As every business has its own development needs, there is no standalone framework that suits all your various business needs. Hopefully, this article has helped you gain a better understanding as to which framework best suits your needs and how they help you build a powerful web application. Still, if you have any doubts about which framework will enable you to build a top-notch application, then we recommend you hire a software development company that deeply analyses your requirements and suggests the best solutions.
https://codeburst.io/angularjs-vs-nodejs-vs-reactjs-which-is-the-best-choice-in-2021-129b54f9768e
['Sophia Martin']
2021-02-17 10:36:16.404000+00:00
['Web Development', 'Mobile App Development', 'Startup', 'Web App Development', 'Technology']
657
Clean Architecture, the right way
What is so Clean about Clean Architecture? In short, you get the following benefits from using Clean Architecture: Database Agnostic: Your core business logic does not care if you are using Postgres, MongoDB, or Neo4J for that matter. Client Interface Agnostic: The core business logic does not care if you are using a CLI, a REST API, or even gRPC. Framework Agnostic: Using vanilla nodeJS, express, fastify? Your core business logic does not care about that either. Now if you want to read more about how clean architecture works, you can read the fantastic blog, The Clean Architecture, by Uncle Bob. For now, let's jump to the implementation. To follow along, view the repository here.
Clean-Architecture-Sample
├── api
│   ├── handler
│   │   ├── admin.go
│   │   └── user.go
│   ├── main.go
│   ├── middleware
│   │   ├── auth.go
│   │   └── cors.go
│   └── views
│       └── errors.go
├── bin
│   └── main
├── config.json
├── docker-compose.yml
├── go.mod
├── go.sum
├── Makefile
├── pkg
│   ├── admin
│   │   ├── entity.go
│   │   ├── postgres.go
│   │   ├── repository.go
│   │   └── service.go
│   ├── errors.go
│   └── user
│       ├── entity.go
│       ├── postgres.go
│       ├── repository.go
│       └── service.go
├── README.md
Entities Entities are the core business objects that can be realized by functions. In MVC terms, they are the model layer of the clean architecture. All entities and services are enclosed in a directory called pkg. This is actually what we want to abstract away from the rest of the application. If you take a look at entity.go for user, it looks like this: pkg/user/entity.go Entities are used in the Repository interface, which can be implemented for any database. In this case we have implemented it for Postgres, in postgres.go. Since repositories can be realized for any database, they are independent of all of their implementation details. pkg/user/repository.go Services Services include interfaces for higher-level, business-logic-oriented functions. For example, FindByID might be a repository function, but login or signup are service functions. Services are a layer of abstraction over repositories in that they do not interact with the database; rather, they interact with the repository interface. pkg/user/service.go Services are implemented at the user interface level. Interface Adapters Each user interface has its own separate directory. In our case, since we have an API as an interface, we have a directory called api. Now since each user interface listens to requests differently, interface adapters have their own main.go files, which are tasked with the following: Creating Repositories Wrapping Repositories inside Services Wrapping Services inside Handlers Here, Handlers are simply the user-interface-level implementation of the Request-Response model. Each service has its own Handler. See user.go
https://medium.com/gdg-vit/clean-architecture-the-right-way-d83b81ecac6
['Angad Sharma']
2020-01-12 15:59:59.237000+00:00
['Software Architecture', 'Go', 'Development', 'Clean Architecture', 'Technology']
658
Manipulating File Paths with Python
Photo by Viktor Talashuk on Unsplash Python is a convenient language that's often used for scripting, data science, and web development. In this article, we'll look at how to read and write files with Python. Files and File Paths A file has a filename to reference it. It also has a path that specifies its location. The path consists of folders, which can be nested to form the path. Backslash on Windows and Forward Slash on macOS and Linux In Windows, the path consists of backslashes. In many other operating systems like macOS and Linux, the path consists of forward slashes. Python's standard pathlib library knows the difference and can sort them out accordingly. Therefore, we should use it to construct paths so that our program will run everywhere. For instance, we can import pathlib and create a Path object as follows: from pathlib import Path path = Path('foo', 'bar', 'foo.txt') After running the code, path should be a Path object like the following if we're running the program above on Linux or macOS: PosixPath('foo/bar/foo.txt') If we're running the code above on Windows, we'll get a WindowsPath object instead of a PosixPath object. Using the / Operator to Join Paths We can use the / operator to join paths. For instance, we can rewrite the path we had into the following code: from pathlib import Path path = Path('foo')/'bar'/'foo.txt' Then we get the same result as before. This will also work on Windows, macOS, and Linux since Python will sort out the path accordingly. What we shouldn't use is the string's join method because the path separator is different between Windows and other operating systems. For instance: path = '/'.join(['foo', 'bar', 'foo.txt']) isn't going to work on Windows since the path has forward slashes. The Current Working Directory We can get the current working directory (CWD), which is the directory the program is running in. We can change the CWD with the os.chdir function and get the current CWD with the Path.cwd function. For instance, we can write: from pathlib import Path import os print(Path.cwd()) os.chdir(Path('foo')/'bar') print(Path.cwd()) Then we get: /home/runner/AgonizingBasicSpecialist /home/runner/AgonizingBasicSpecialist/foo/bar as the output. As we can see, chdir changed the current working directory, so that we can manipulate files in directories other than the one the program is running in. The Home Directory The home directory is the root directory of the user account's profile folder. For instance, we can write the following: from pathlib import Path path = Path.home() Then the value of path is something like PosixPath('/home/runner'). Absolute vs. Relative Paths An absolute path is a path that always begins with the root folder. A relative path is a path that's relative to the program's current working directory. For example, on Windows, C:\Windows is an absolute path. A relative path is something like .\foo\bar. It starts with a dot and foo is inside the current working directory. Creating New Folders Using the os.makedirs() Function We can make a new folder with the os.makedirs function or the Path.mkdir method. For instance, we can write: from pathlib import Path Path(Path.cwd()/'foo').mkdir() Then we make a foo directory inside our current working directory. Photo by Lili Popper on Unsplash Handling Absolute and Relative Paths We can check if a path is an absolute path with the is_absolute method.
For instance, we can write: from pathlib import Path is_absolute = Path.cwd().is_absolute() Then we should see is_absolute being True since Path.cwd() returns an absolute path. We can call os.path.abspath to return a string of the absolute path of the path argument that we pass in. For instance, given that we have the directory foo in the current working directory, we can write: from pathlib import Path import os path = os.path.abspath(Path('./foo')) to get the absolute path of the foo folder. We then should get something like: '/home/runner/AgonizingBasicSpecialist/foo' as the value of path. os.path.isabs(path) is a method that returns True if the path is absolute. The os.path.relpath(path, start) method will return a string of the relative path from the start path to path. If start isn't provided, then the current working directory is used as the start path. For instance, if we have the folder /foo/bar in our home directory, then we can get the path of ./foo/bar relative to the home directory by writing: from pathlib import Path import os path = os.path.relpath(Path.home(), Path('./foo')/'bar') Then path has the value '../../..'. Conclusion We can use the pathlib and os modules to construct and manipulate paths. We can also use the / operator with Path objects to create a path that works on all operating systems. We can also pass paths into the Path function to construct paths. Python also has methods to check for relative and absolute paths, and the os module can construct relative paths from 2 absolute paths.
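As a closing sketch that is not part of the original article, here is how the same pathlib and os functions can be combined to create nested folders and compute a relative path; the folder names are made up for illustration:

from pathlib import Path
import os

# Build a nested path in an OS-independent way with the / operator
nested = Path.cwd() / 'foo' / 'bar' / 'baz'

# parents=True creates the intermediate folders, exist_ok=True avoids an
# error if the folders already exist
nested.mkdir(parents=True, exist_ok=True)

# os.makedirs does the same thing and also accepts Path objects
os.makedirs(Path.cwd() / 'qux' / 'quux', exist_ok=True)

# Compute the path of the new folder relative to the current working directory
print(os.path.relpath(nested, Path.cwd()))  # foo/bar/baz on macOS and Linux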
https://medium.com/python-in-plain-english/manipulating-file-paths-with-python-72a76952b832
['John Au-Yeung']
2020-05-04 14:50:44.216000+00:00
['Programming', 'Technology', 'Python', 'Software Development', 'Software Engineering']
659
Display Rich Text In The Console Using Python
Are you using the terminal more than GUI-based operating systems? Or, do you usually develop command-line interface programs using Python? Recently, I found an amazing Python library called "Rich" on GitHub, which already had 15.2k stars at the time I was writing this article. It can display not only coloured and styled text in the terminal but also emoji, tables, progress bars, markdown and even syntax-highlighted code. If you have read one of my previous articles below: The "Rich" library can almost do everything that I have introduced in that article, as well as some other awesome stuff. In this article, I'll introduce this library with examples to show its capabilities. Introduction & Installation Image source: https://github.com/willmcgugan/rich (Official GitHub README.md) The image above shows the major features that the library offers. You can find the project on GitHub: The library requires Python 3.6.1 or above to be functional. Installing the library is quite simple because we can use pip. pip install rich Rich Printing Let's first look at how it can print. Tags and Emoji It supports formatting tags and emoji.
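The embedded code examples from the original post are not included above, so here is a minimal sketch of what printing tags, emoji and a table with Rich can look like; it assumes Rich has already been installed with pip as shown:

from rich import print
from rich.console import Console
from rich.table import Table

# Style tags and emoji shortcodes are interpreted directly by rich's print
print("[bold magenta]Hello[/bold magenta] [green]World[/green] :smiley: :thumbs_up:")

# A Console object gives access to richer output such as tables
console = Console()
table = Table(title="Example")
table.add_column("Library")
table.add_column("Purpose")
table.add_row("rich", "Pretty terminal output")
console.print(table)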
https://towardsdatascience.com/get-rich-using-python-af66176ece8f
['Christopher Tao']
2020-11-30 16:09:28.681000+00:00
['Technology', 'Software Engineering', 'Artificial Intelligence', 'Software Development', 'Programming']
660
Develop a Peer to Peer Blockchain in Python
Let us go through the process to develop a peer to peer blockchain in Python. Our development environment will be a Virtual Private Server running Ubuntu. You can purchase a droplet from Digital Ocean for $5 and cancel it when you are done. When you have purchased the VPS, you need to set it up with the following: Initial server set-up: you can follow our previous tutorial on initial server set-up and installing Python. https://skolo-online.medium.com/how-to-create-python-flask-web-application-on-ubuntu-20-04-cf2edc90ee36 Install Git on your personal machine and VPS. https://git-scm.com/book/en/v2/Getting-Started-Installing-Git Install a personal text editor like Atom. Set-up VPS remote repository This is a crucial step: we will be writing the code on our personal computers but pushing it to the VPS via ssh. The code will be run on the VPS, which is already set up with a Python virtual environment. Follow the steps below to create the repo and link it to your personal computer: In the VPS (virtual server), create two directories, call one blockchain and the other repo. mkdir blockchain repo Then cd into the blockchain folder and create an empty git repo: cd blockchain git init cd .. Then cd into repo and create a bare git repo: cd repo git init --bare Add the following to the hooks/post-receive file and make it executable: sudo nano hooks/post-receive #!/bin/bash git --work-tree=/path/to/blockchain/folder/ --git-dir=/path/to/repo/folder/repo checkout -f sudo chmod +x hooks/post-receive Then you need to run the following on your personal computer. Ensure you have ssh sign-in to the root of the VPS set up and you have the password or ssh keys on your computer. This code must be run in the directory where you are building the code, on your personal computer. git init git add . git commit -m "initial commit" (or anything you want to call the commit) git remote add origin root@41.xx.xx.xx:/path/to/the/repo/folder/repo Now that we have created a git repo, committed changes and added a remote repo, we need to push the changes across. git push origin master Every time you make a change, you will need to push changes before running the code. Peer to Peer Blockchain in Python code: The files will be nested in a Flask application for a reason; for now just work with the standard Flask set-up, plus some additional classes:
|_____ static (empty for now, we will use bootstrap CDN)
|_____ templates
|      |______ index.html
|      |______ 404.html
|__________________ app.py
|__________________ config.py
|__________________ blockchain.py
|__________________ account.py
|__________________ peerServer.py
Blockchain.py file It makes more sense to start with this file. In here we will write code that deals with the blockchain itself. A blockchain is a series of blocks, and each block contains the following: (1) index, (2) hash of the previous block, (3) nonce, (4) the data, (5) timestamp and (6) the hash of this block. The index represents the number the block has in the chain of blocks. The previous hash connects this block to the previous one by recording its hash, which is then used to calculate the hash of this block. A nonce is a random number calculated when we get the hash of the current block. The data can be anything, even a legally binding contract. The hash is the data signature of everything in the block. The genesis block is the first block in the chain; its values are easier to estimate.
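To make the block structure above concrete, here is a minimal sketch of a block with an index, previous hash, nonce, data, timestamp and hash; the field names and the choice of SHA-256 are illustrative assumptions rather than the exact code in the project's blockchain.py:

import hashlib
import json
import time


class Block:
    def __init__(self, index, previous_hash, data, nonce=0):
        self.index = index
        self.previous_hash = previous_hash
        self.data = data
        self.nonce = nonce
        self.timestamp = time.time()

    def compute_hash(self):
        # Hash every field of the block, including the previous block's hash,
        # so that each block is chained to the one before it
        block_string = json.dumps({
            'index': self.index,
            'previous_hash': self.previous_hash,
            'data': self.data,
            'nonce': self.nonce,
            'timestamp': self.timestamp,
        }, sort_keys=True)
        return hashlib.sha256(block_string.encode()).hexdigest()


# The genesis block is the first block; its previous hash can be a fixed value
genesis = Block(index=0, previous_hash='0', data='genesis block')
print(genesis.compute_hash())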
In this file we will include functions for: Adding transactions to the blockchain Mining (proof of work) Verifying transaction signatures Adding transactions to the pool: In our blockchain, transactions will come from two different sources: the front-end application (hence we needed Flask) and broadcasts from other nodes. When we receive these transactions, we will add them to our own transaction pool. Our transaction pool is currently an SQL database hosted on the same server. Proof of work (mining): The type of blockchain created here is Proof of Work. This means that in order to add new blocks to the chain, a computation must be carried out. We will be calculating the nonce value, a random number required to give us a data signature that meets certain requirements. Difficulty is introduced to manage how fast blocks can be created, to make it a process that takes time. Once the computer has the data that must fit in the block, it must hash that data as many times as required to find a hash that meets the requirements set. The harder we make the requirements, the longer it will take the computer to find the correct nonce. Therefore the difficulty is determined by setting the requirements, which can be anything. In our blockchain, we want all the hashes to have 00009 at the beginning. There is some difficulty, but not too much; we are a private blockchain. Verifying transaction signatures: This function is written to ensure the signatures in the pool are valid before adding the data to the blockchain to be mined. Our signatures are created using Elliptic Curve Cryptography methods, from the private key of the account and the data itself. No two pieces of data will have the same signature, and if we have the signature and the data, we can use the public key of the account to verify the validity of the signature. We do this to make sure the person who sent the data is the owner of the account associated with the data. Client Facing Flask Application We created a Flask application to enable us to communicate with a peer in the blockchain. The endpoints include: Home page for submitting applications End-point to view the transactions currently in the transaction pool End-point to view the chain and its data End-point to connect a new node to the network: a new node would need to send a POST request to this end-point to be added to the network of nodes MYSQL Database The database is required to store data for the node. Each node connected will have a separate database and manage its own database. The database is created outside of the Flask application, with the tables etc. The data in the database can be added and removed via the Flask application. Other files We have the account.py and peerServer.py files as well; all the code is available on GitHub. It would take too long to go through all these files, so review them on GitHub and/or check out the full video tutorial. An 8-hour video tutorial can be found here: Peer to Peer Blockchain in Python Create a peer to peer blockchain using python.
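To illustrate the proof-of-work idea with the '00009' prefix mentioned above, here is a hedged sketch of the nonce search; it reuses the hypothetical Block class from the earlier sketch and is not the project's actual mining code:

DIFFICULTY_PREFIX = '00009'  # per the article, hashes must start with this string


def proof_of_work(block):
    # Increment the nonce until the block's hash satisfies the difficulty requirement
    block.nonce = 0
    computed = block.compute_hash()
    while not computed.startswith(DIFFICULTY_PREFIX):
        block.nonce += 1
        computed = block.compute_hash()
    return computed


# Usage: mine a block on top of the genesis block from the previous sketch
new_block = Block(index=1, previous_hash=genesis.compute_hash(), data='some transaction')
print(proof_of_work(new_block))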
https://medium.com/@skolo-online/develop-a-peer-to-peer-blockchain-in-python-f7c9bdbefcda
['Skolo Online Learning']
2020-12-11 13:45:02.836000+00:00
['Python Programming', 'Blockchain', 'Blockchain Technology', 'Python Blockchain', 'Blockchain Development']
661
Distributed Systems, Distributed Teams
Telecommuting is becoming more of a common practice, and understandably so. A business can expand its talent pool quite a bit when searching beyond its physical location, and the investment in a remote worker can be less than that of an office worker¹. Remote employees also see a number of positives from not commuting, spending that time in other ways. Many teams even allow office employees to work from home 1-2 days a week. This is particularly applicable in software engineering. Many employers require engineers that know specific languages, and that search is made easier with telecommuting employees. Thanks to powerful laptops and VPNs, software engineers can do most of their work in any location. Despite the benefits, many telecommuting relationships do not work out well. Some employers find it too difficult to have proper oversight and communication with their remote team members, while employees may find the experience to be isolating. One bad experience can turn either an employer or employee off to the idea entirely. I don't claim to be an expert on the matter. Still, I have successfully telecommuted for a few years now, and I would like to offer the little insight that I have. Quick To Call Electronic communication is a great way to have quick conversations with a searchable history. However, once the conversation gets beyond a few quick messages, it can become a burden. Talking is simply a faster form of communication, and comes with the wonders of non-verbal communication. It is a good idea to call your peer(s) for longer conversations. Show Your Face (And Screen) Yes, it is tempting to turn off the camera and wear PJs to work. That comes at a high cost: non-verbal communication. This is invaluable to proper communication, and I'm sure many of your peers enjoy seeing you each day! Sharing your screen is also a big advantage when talking with your peers. Whether you need to work through some code or hash out a bug, sharing what you're doing allows others to provide faster input. Document Important Conversations Remote employees do not have the context from the day-to-day conversations that occur in the office. While that is an unfortunate downside of telecommuting, the impact is lessened if pertinent conversations are documented electronically in an accessible area. Limit Distractions We've all been there. You get out of bed and notice that the house could be picked up. The dishes in the dishwasher could be put away. There's a mountain of laundry to do. The grocery list is a mile long. Your pets want attention. Working remotely can offer a number of new distractions from your work life. It is impossible to eliminate them all, but it is certainly a good idea to limit them when possible. Designating specific times in the day for work and for home tasks is helpful. Some may be more productive outside of their house, such as in a coffee shop. This is a problem with office employees as well, especially ones who work in open floor plans. Wherever your office is, be sure to take steps to limit distractions where possible. Commit To Meetings Putting away distractions (phone, computer) during meetings is a good habit. By doing so, you're giving everyone else your full attention, which can yield very productive conversations. This is more difficult for remote employees. Often we are leveraging our computers to engage in the meeting, so it isn't as easy as putting your computer away.
Remote employees can ensure that they are committed by full-screening the conferencing software that they are using, and ignoring other notifications. Overlap Schedules (where possible) Much like services, teams become more difficult to manage when they are distributed. You may have some employees that work different shifts for different reasons, whether that be their physical location or general lifestyle. While this is a difficult challenge, coordinating between remote team members is made much easier when you have some parts of your schedule that overlap. Live in the midwest, but your team is based out of NYC? Consider starting your day earlier to match your team members. Have an overnight team? Try to consistently rendezvous with that team when you start/end your day. Turn Off When you are an office employee, you will likely leave work sometime in the evening. This typically acts as a delineator for work/home life as many will not bring work home with them. For remote employees, however, it is easy to continue working well beyond an ordinary work shift as you always have your work at home. Thus it is important to “turn off” from work. Just think about it. How can you limit distractions during work if all you do is work? At some point, that chore list has to get done. Allowing yourself to turn-off work gives you the opportunity to schedule out that chore-list and spend time with your family. That way, when you come to work, you can focus on just that: work.
https://medium.com/disney-streaming/designing-distributed-systems-with-distributed-teams-b75994df4579
['Eric Meisel']
2019-02-21 22:52:34.354000+00:00
['Technology', 'Remote Working']
662
Self-Healing Testing using Deep Learning Algorithms
Credit For The Image Goes To: https://www.digitalocean.com/ Why Deep Learning? I'm part of the DBS iTES team, which supports the automation testing framework used by Software Development Engineers in Test (SDETs) to test Treasury & Markets (T&M) applications. Each day, there are numerous regression tests conducted by various Investment Trading Technology (ITT) project teams to verify their application stability or assurance before applications go live. From time to time, project teams may encounter test failures due to upgrades or UI changes, which take plenty of time to debug and resolve. With the exponential growth in the Artificial Intelligence (AI) space, our team was curious to find out whether we could leverage some of the available solutions to resolve the above-mentioned issues while running tests. The selection process for what feature to roll out was rather straightforward. Our team conducted a poll and put forth several features that SDETs were able to choose from at one of our guild meetings. From the poll, we found out that 47% of the test engineers would like to have AI automatically fix their regression errors for them. Poll results from Testcenter Guild Meeting It is the End Result That Matters Early on, it was clear to our team that we needed a proof of concept (POC) to show that our wild idea was worthy of further pursuit. "To get what you want, you have to deserve what you want. The world is not yet a crazy enough place to reward a whole bunch of undeserving people." ― Charles T. Munger With reference to papers and studies by other companies who have applied some form of AI/ML for testing, our team decided to explore the possibility of a mixed technique consisting of both visual recognition and element properties weight optimisation to resolve broken locators. Rapid prototyping was the approach chosen to get the project off the ground. First, a simple Object Detection Model trained on images retrieved from testing regression screenshots is deployed as an application on IBM® Red Hat® OpenShift®. This is followed by the development of an interface between our testing framework and the AI application (VSDojo) hosted on OpenShift to communicate the screenshots of the errors as well as the returned prediction results. Close-up Validation Results from the object detection model What if the situation was more complex? Does the model hold up for the various user interface (UI) designs within the ITT space? Validation Results from the object detection model More validation results from the object detection model Truth be told, there were times when the model did not perform to expectations. But more on that later. Example of localisation failure from the object detection model We will go into detailed explanations for the architectural design, strategy, and decision-making later on in this article. Briefly, the current Object Detection Model is manually labelled (Supervised Learning) to detect only two web elements: buttons and inputs. Example of LabelImg python tool on sample training image We used an 80/20 train/test split while keeping certain projects out of the training sample. Our team manually labelled each of the images using the Python tool LabelImg, which conveniently saves the coordinates and class definitions as XML that would be used to train our model. Further evaluation was conducted using the 20% test split to ensure that the model can be used beyond the images it was trained on.
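For readers curious how the LabelImg annotations can be consumed, here is a hedged sketch that parses the Pascal VOC-style XML files LabelImg writes and performs an 80/20 split; the file paths are made up for illustration, and this is not the exact training pipeline used for VSDojo:

import glob
import xml.etree.ElementTree as ET

from sklearn.model_selection import train_test_split


def read_annotation(xml_path):
    # LabelImg saves one Pascal VOC XML file per labelled screenshot
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.findall('object'):
        name = obj.find('name').text  # e.g. 'button' or 'input'
        bndbox = obj.find('bndbox')
        box = [int(float(bndbox.find(tag).text)) for tag in ('xmin', 'ymin', 'xmax', 'ymax')]
        boxes.append((name, box))
    return root.find('filename').text, boxes


annotations = [read_annotation(path) for path in glob.glob('labels/*.xml')]

# 80/20 train/test split of the labelled screenshots
train_set, test_set = train_test_split(annotations, test_size=0.2, random_state=42)
print(len(train_set), len(test_set))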
The Journey, Challenges Faced, Hurdles We Overcame Now, let's take a closer look at the overall architecture. The diagram below might look overwhelming, but it can be broken down into three broad categories: Object Detection Model (OpenShift), Logging (ELK) and DriveX (Object Storage). Overall Architecture Diagram The process kicks off the moment SDETs run their tests via the Testcenter's Serenity framework, and upon failure, the screenshot would be captured and sent to the Object Detection Model hosted on OpenShift (VSDojo). As mentioned in the previous section, the model would then return its prediction and coordinates of detected web elements back to the framework. The model's ability to generalise across multiple ITT projects without having to programmatically specify conditions is not something to be sniffed at! In order to appreciate its elegance, it is necessary to understand "deeply" how it works. Diagram showing the basic flow of a CNN model Traditionally, the first thing that comes to mind when dealing with computer vision is Convolutional Neural Net (CNN) models. The name "convolutional neural network" indicates that the network employs a mathematical operation called convolution. Convolution is a specialised kind of linear operation. Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers. In short, the input image is strided over and progressively flattened through the connected layers to extract key features. The final layer is typically a SoftMax or sigmoid function that outputs the predicted result. The backpropagation of errors from different training images and their corresponding labels is the machine learning aspect that trains the model, allowing it to learn and classify other similar images. Difference between traditional image classification and object detection Since our team requires a solution that is not only able to identify the type of web elements but also their coordinates, the Object Detection Model lends itself naturally as a good solution. There are certainly many other object detection architectures currently in use in the AI industry, including Single Shot Detection (SSD), ResNet, Faster RCNN, and You Only Look Once (YOLO). Each of them has its trade-offs; for instance, a larger model would have slower inference times in exchange for higher accuracy. Our team chose to stick with YOLOV3 due to its low inference time of 22ms, to minimise impact and delay to the testing regression. The superior speed of YOLOV3 can be attributed to its unique bounding box strategy. For more information, refer to the original paper linked in the references below. Finally, to overcome the problem of localisation failure, we plan to increase the amount of data used to train the model. We plan to do this using data augmentation, which involves generating different variations of the image by rotating and inverting the original images for improved generalisation by the model. This feature is included in the TensorFlow library under the preprocessing layers. Now that we have a better understanding of the model, let's explore the next key piece, ELK logging. One key point made by our Tech Lead, before our team kicked off the project, was to ensure a proper test bed that would measure the performance of our proof of concept, instead of simply measuring the Intersection Over Union (IOU) or accuracy of our model. "If you cannot measure it, you cannot improve it." ― William Thomson, Lord Kelvin.
Staying true to the advice given, our team addressed the need by piping the results from the framework directly into ELK. Through the dashboard, we can pinpoint exactly where the model is not performing well, along with the stack trace, which can be used to further optimise our model. The final essential component for the proof of concept would be the storage of metadata to support the sustainability of the model's performance over time. The main idea revolves around storing past success and failure information in DriveX (Object Storage) that can be used as training data for the model. The other question to be addressed is the kind of information that our team chose to retain in order to automate the training process. For each successful test run, the original selector, coordinates, outer HTML and, most importantly, images of the web elements were kept. This data would eventually be the cornerstone on which to build future models. To recap, the process flow begins with SDETs running regressions via our framework. If there were web element failures due to UI changes, an API call would be made to the model hosted on OpenShift to retrieve predictions, allowing for self-healing to take place. Testing logs would be sent to ELK to track real-time performance, and finally, metadata would be sent to DriveX to train future models. Contrarian View When our team members bounced ideas off each other at the start, we thought that visual recognition would not be the right fit for the task. The alternative was a generic JavaScript or XPath filter "//button //input", which would yield the same result. Subsequently, we realised that the approach of using XPath and JavaScript to capture all elements on a webpage would be excessive, as more processing would be required to evaluate elements that were not on the screen during failure. Furthermore, conventional HTML tags might not be sufficient given that JavaScript sites permit customisable tags. As such, we decided to stick with using the visual recognition model to identify and resolve possible matches for failures. During subsequent phases, we used a weighted optimisation model to pick the best match out of all the returned web elements. The Weakest Link But we have yet to address the elephant in the room: the selection of the best matching element when multiple web elements are detected by the model. What we are doing right now is simply using regex to slice the outer HTML of our model's predictions to match against the original input selector. Credit For The Image Goes To: https://blogs.oracle.com/ This approach is rather primitive, and we foresee issues such as false positive selections, which could arise due to the nature of the implementation. Therefore, we plan to train another model to select the best matching web element for any given selector. Essentially, we will have two models: one for detecting elements from the webpage on failure, and a separate model to pick the right element for self-healing. Unfinished Business The list of crucial pending tasks to be completed includes piloting the proof of concept on a couple of projects to validate the performance live using metadata from successful test runs, as well as building a weighted optimisation model to select the best match. Once the proof of concept results shown in ELK have proven to be useful, our team will place greater emphasis on the sustainability and scalability of the system, mainly through the ability to retrain and replace the model automatically if the performance of the existing model begins to decay.
Deep Dive into Data When we retrieved statistics from ELK Stack, which we had been diligently sending our testing results to, we found that the AI model helped resolve 14.32% of the total test cases, which looks underwhelming at first glance. However, this proof of concept was scoped on just fixing input and buttons-related errors. After closer inspection, if we only consider input and button related test errors, the model has a successful fix rate of 77%, as seen in the chart below. Auto Healed Breakdown of failed testcases from ELK Our team also made some minor changes to improve the overall performance of the system, mainly the introduction of VSMode, which consists of three workflow states. These are the training mode, which is only activated during locale machine test run; evaluation mode, which is activated during CI/CD pipeline; and finally, disable mode for edge cases like performance testing, which does not need the feature. The workflow state would allow the model to learn from new data as the SDETs are developing new testcases, as well as performing self-healing on failed testcases within the CI/CD pipeline, improving overall efficiency of computing resources. Getting your hands dirty Inspired to get started with your own micro-innovation? Here are some articles and video tutorials to help you kick things off. If you are sitting on the fence whether you should start your own AI journey, know that motivation comes after action — all you need is to be brave enough to take the first step! A good place to start would be MNIST dataset where we train a model to predict handwritten numbers from 1 to 10. I highly recommend beginners to use high level wrappers such as Keras, PyTorch or scikit-learn to train their model. The wrapper makes the coding much more intuitive and easier to learn. Once you have mastered either of the recommended libraries, proceed to learn Tensorflow, which allows greater control. Check out the Deep Learning with Python, TensorFlow, and Keras tutorial. Note: The assumption is that you already have some prior knowledge or experience with coding in python. This exercise is a simple follow-along supervised learning model tutorial. The beauty of machine learning models the flexibility and ability to generalise across various unseen inputs, much like our VSDojo model’s ability to generalise across multiple ITT web application without any formal code in place. Interested to learn more? Check out the latest cutting-edge techniques that industry leaders are practicing or using within their projects here. The possibility for learning is endless but the best way to learn is to apply what you have learnt in a project. It can even be a small feature that does simple optimisation. References https://www.tricentis.com/resources/ai-in-software-testing-reality-check/ https://medium.com/hackernoon/introduction-to-deep-learning-9064d6b87a51 https://medium.com/deepquestai/train-object-detection-ai-with-6-lines-of-code-6d087063f6ff https://en.wikipedia.org/wiki/Convolutional_neural_network https://arxiv.org/abs/1804.02767 https://github.com/tzutalin/labelImg?fireglass_rsn=true https://www.tensorflow.org/learn
https://medium.com/dbs-tech-blog/self-healing-testing-using-deep-learning-algorithms-506bbbde9cd4
['Por Yee']
2021-07-06 15:48:20.526000+00:00
['Test Automation', 'Data', 'AI', 'Deep Learning', 'Technology']
663
How Advancement Of Technology Is Assisting Roofing Companies
In today's connected world, just like most industries, the roofing sector is discovering several opportunities that it can utilize to make its work more efficient and help its business run smoothly. Below are some examples of technology that roofing companies use to improve their client experience. 1. Simplify Jobs using Aerial Imagery Aerial Photography from Airplanes Aerial photographers are continuously enhancing their craft, and these photographers are now offering photos of properties and buildings that roofing companies can request and procure. The photographs can later be used for measurements and estimation. Drone Photography One of the most highly demanded technologies in recent years is drones. The unmanned aerial device is continuously capturing the public's imagination as almost every organization tries to take advantage of it, and the roofing industry is not an exception. Both third-party service providers and individual roofing contractors have started to utilize drones so that they can capture aerial imagery and other related data for roofing projects. 2. Speed up Quoting with Estimation Software Before starting a project, one of the essential parts of a roofer's work is to come up with a proper and fair quote for their clients. Companies can decrease the time it takes to produce a quote by utilizing software programs that will do the calculations for them. 3. Promote using Social Media Nowadays, it is not difficult to measure the impact social media has on the world, as it has successfully altered everything from everyday social interactions to politics to retail industry sales. Social media can also be useful for roofing companies, as they can efficiently market their offerings to their targeted audience. Technological advancement does not mean it only has to be digital. Roofing companies can also enhance the performance of a new roof with products like high-performance asphalt shingles that provide better colors, an improved look, and easier installation.
https://medium.com/@chrishtopher-henry-38679/how-advancement-of-technology-is-assisting-roofing-companies-93bed4a53526
[]
2020-11-24 10:44:18.484000+00:00
['Roofing', 'Solutions', 'Drones', 'Technology', 'Construction']
664
LifeLink CEO Greg Johnsen: AI vs. Healthcare’s ‘Fat Belly’
Audio + Transcript Greg Johnsen: It’s all about using technology to free smart minds up to do the things that they’re really good at. James Kotecki: This is Machine Meets World. Infinia ML’s ongoing conversation about artificial intelligence. I am James Kotecki joined by the CEO of LifeLink, Greg Johnsen. Welcome Greg. Greg Johnsen: Thank you. Good to be here James. James Kotecki: So Greg, your website says “chatbots are the future of healthcare.” So we’re talking about AI powered chatbots that don’t take the place of doctors it sounds like in your case at least, they take the place of an administrative conversation that I might have with somebody about billing for example, is that right? Greg Johnsen: That’s right. We’re not using our technology to replace a doctor’s diagnostic assessment or the skills of a physician. What we’re really focused on are all the workflows and protocols that are robotic that humans have to do. I’m talking about things like helping patients complete forms, reminding them to get to visits, doing intake, and triage, and questionnaires, and surveys and keeping them posted on their status, being a search engine for them so when they’re looking for something. That’s what we call the fat belly of cost and aggravation in a lot of the healthcare business. James Kotecki: Let’s go right to the million dollar jobs question that I’m sure you get all the time, “Is my job now on the line? I’m calling people all day trying to track down bills or set up appointments.” What do you tell those folks? Greg Johnsen: Well, it’s not like that work is pleasurable work or the best way to take a human brain and deploy it there. And what humans are really good at is all the empathy, empathetic discernment that has to happen in human dialogue and conversations and interaction. So if you can offload a lot of the rote procedural work that burns them out and move it to a mode that is low cost but also one that patients prefer. So, the way we explain it and articulate to large healthcare systems and pharma companies and healthcare organizations who we target is, “We’re going to shift up the skill level. We’re going to take the human teams that you have and allow them to shift up into the upper cognitive stack of what humans are really good at. And we’re going to take out the robotic work and move it to this layer that is not only cost effective but it’s the kind of engagement that consumers prefer to do with a digital assistant.” So this is about how to increase capacity in an industry that has been short and thin on service and capacity and how to solve that problem. James Kotecki: And we’re talking for context about people interacting with these bots via their keyboards and screens, right? We’re not talking about an automated voice calling them on the phone or are we? Greg Johnsen: That’s right. We’re predominantly, exclusively what we’re doing right now is chatbots in chat on mobile phones. You’re going back and forth with chat bubbles. James Kotecki: So, you’re saying that this is work that humans generally don’t want to do or aren’t best utilized for and I tend to agree with you there and yet it’s interesting that the interface that you’re building is meant to in effect replicate that interaction people have with a human. Greg Johnsen: That’s right. James Kotecki: Because if I go back and look at technology for the last 20 years, we’ve long had the ability to fill out a form on a website, right? But why are you choosing to do it in a human-like way? 
And as a follow-up question, how human are you trying to make this thing seem? Greg Johnsen: When you give a consumer, getting ready for a healthcare appointment, a complicated portal, or a set of steps to log in and download an app and learn it and find their username and password and register — it doesn’t happen. It’s why healthcare systems have about a two percent penetration in terms of reaching patients in workflows on mobile phones. Two percent. And it’s because it’s hard. Conversations are simple and they’re easy and they’re natural and there’s nothing to learn. It’s a powerful modality to connect with consumers at scale. And so, our whole architecture is built around that very simple idea which is, how do you connect with a patient, with a consumer in a complicated healthcare flow with a conversation that feels like a human conversation? Greg Johnsen: Now to be clear James, we’re not setting up these chatbot workflows to fool anybody. It’s not like the entity on the other end of that conversation ever represents itself as anything other than a digital assistant. They feel like the thing you’re working with is doing things for you and can kind of anticipate what your next move is or what you might be interested in doing or what you should do. When we start talking about AI and machine learning this is really where LifeLink is focused. It’s using smarts and technology to predict what the next best thing or move is in the context of a specific healthcare protocol and a specific human, a person, a patient who we know about. James Kotecki: Are we really talking about two separate AI systems here? One system is the one that can go into healthcare records or billing records and find the information that is relevant for that next moment in the conversation, maybe it’s being queried specifically like a search engine. And then the other kind of AI, which is being merged together in your product, is that conversational part that makes it seem more natural and more like a conversational interface. Am I right to think of those as two separate things, and if so which one is harder? Greg Johnsen: Well, they’re certainly different applications of AI. The front end thing you might see or associate with AI with chatbots is natural language processing which is understanding what the patient — what the consumer is wanting to do. And so, you have to use AI to understand what their intent is, their intent, and in the context of the entities or things that this domain is all about. Greg Johnsen: There’s also this idea of how to make smart predictions about where things can go. So for example, one of the big things in healthcare is patients not showing up for surgery, expensive. There’s an OR that costs $100 a minute. If you can get patients to show up on time and be ready it’s a big deal, but you need lots of humans to do that, to call people, to give them their instructions, to remind them, to call them every day, to get a ride for them. It’s raining, make sure you get a ride. That’s complicated stuff but if it’s chatbot-driven then it can happen at scale. And the AI used to make that happen is all about predicting which patients are most likely to not show up or to show up late or to show up unprepared. Greg Johnsen: So if you can begin to have different conversations with patients based on what you know about those kinds of patients in these kinds of workflows at scale, suddenly those conversations get a lot more powerful because they’re not just natural, well-timed conversations. 
They’re pinpointed, precision conversations that tie and stitch back to a value proposition. Fewer no-shows. Fewer late-shows. For example. Greg Johnsen: There’s two different applications of AI and there are 17 more. I mean, predicting wait times in an emergency department is another application. It’s another way of — it’s in that camp of machine learning where you’re looking at all the tests and the turnaround times for lab tests and blood tests and imaging tests over millions of patient visits, which is what we do. So you can communicate that to the patient and say, “Looks like your lab test just went in. It looks like it’s going to be about a 35-minute wait. You might want to get comfortable and we’ll let you know when you’re about five minutes away from getting your physician to come out and talk to you about it.” That’s a lot of computing to make that happen. Greg Johnsen: And I think one of the really fascinating things about this kind of technology in healthcare is that it’s such a laboratory for human decisioning, human bias, human behaviors. How many nudges it takes to reach out to somebody to get them to engage, and does that have anything to do with their zip code? Does it have anything to do with their gender, or their age? You’re digitizing all the conversations and all the sentiments all along these workflows, so suddenly you have this consumer research platform. Are there patterns about, should we be incenting certain consumers in certain areas to get a wellness visit more often and should we pay for their ride? James Kotecki: Do you run into any issues of AI bias? Greg Johnsen: Probably less than in other AI areas. We’re not generating language. We use NLP to understand and then using predefined responses. And by the way, they have to be in healthcare. We’re literally tuning into the approved, medically compliant workflows and dialogues that have to be delivered. James Kotecki: One last rapid question before you go, how far along the path do you think we are to artificial general intelligence based on the work that you’re doing now or is that not even really even the right question to ask? Greg Johnsen: Clinical care is better than it has ever been in healthcare, and we have highly-trained physicians who know more, have access to more than ever before. The problem is they’re not freed up to do that work. They’re encumbered by a lot of the administrivia and inefficiency of workflow. So, I really think for the next decade it’s all about using technology to free smart minds up to do the things that they’re really good at. And I don’t have an opinion on when general intelligence will arrive, I would only be speculating. James Kotecki: Well, that’s what we love to do sometimes, but hey Greg Johnsen, CEO of LifeLink, I really appreciate you joining us today and sharing your insights here on Machine Meets World. Greg Johnsen: Thank you for having me. Nice meeting you James. James Kotecki: And thank you so much for watching. I’m James Kotecki. You can email the show [email protected]. Like us, share us, comment, you know what to do. That’s been what happens when Machine Meets World.
https://medium.com/machine-meets-world/lifelink-ceo-greg-johnsen-ai-guts-the-fat-belly-of-healthcare-costs-de25db6962ff
['James Kotecki']
2020-10-19 16:08:27.215000+00:00
['Machine Learning', 'Artificial Intelligence', 'Chatbots', 'Technology', 'Healthcare']
665
The First Lines of Code: Fixing Healthcare from the Inside Out
I quickly learned that trial by fire is the only way to get up to speed. My first three months included: responding to a major security incident (which turned out not to be an incident at all); launching the largest reform effort in Medicare's history, the Quality Payment Program, alongside our CMS colleagues; transitioning the Digital Service team out of a project that had consumed every essence of our identity; and quickly losing a large portion of that team who, like the Director before me, had been there much longer than anyone initially intended. Soon after getting my bearings, the team embarked on a new path: exploring the power of CMS in the context of technology stifling or driving our future health system. In March of 2017, alongside the Office of Enterprise Data and Analytics, we launched our first new project, Blue Button 2.0. Bigger than just the product we built, Blue Button put in motion a new approach to data sharing in healthcare and established the foundation for CMS to share more timely information using modern technology and the FHIR (Fast Healthcare Interoperability Resources) standard. Industry was dragging its feet on sharing data, citing every excuse imaginable, and taking the wheel to drive the revolution, or "data quake," was our only option.
A New Path
With standards-based APIs as the new foundation for data sharing at CMS and across healthcare, research into what our future health system needs commenced. We traveled to numerous cities to meet with patients, providers, startups, health systems, venture capital, and private equity. We hosted round tables and spent hours sticky-noting and whiteboarding, identifying what kinds of products should be put in motion. A policy and technical implementation agenda emerged, and together these two parts formed a powerful whole that ignited enthusiasm, moved CMS into position as a leader in health technology, and began to move an entire industry forward. We had learned that industry was stalled and that, without CMS taking charge, the future that many leaders talked about as their vision would not actually become reality. In every conversation we would hear the same kinds of sentiments: "we can't adopt value-based payment models or the tech to support them at scale until CMS does," or "we won't share our data until we are told we have to." Through those conversations, two key themes guided our path forward: Doctors need real-time access to data that matters, in a way that is usable for making decisions. Value-based care built on a fee-for-service chassis will always limit the outcomes we can expect to see, and those limitations are mostly imposed by systems and technology.
What We Accomplished
In 2019 our team launched Data at the Point of Care, the first bulk claims data API for providers, allowing all Medicare providers to receive claims data for their Medicare patient population. On the other end of the systems and data pipeline, the Digital Service team was busy working with CMS to move to the cloud a system that processes $500 billion in payments a year, is made up of 13 million lines of COBOL on a mainframe, and has an 8–12 month cycle for introducing major changes or bug fixes. Modern infrastructure is a necessity in a health system that pays for value. On top of the hands-on-keyboard work, the Digital Service team was assisting CMS in identifying what an agency that prioritizes technology and successful implementation would need.
This included helping hire technical talent, proliferating modern software development practices and tools, pushing for security over compliance, and developing a deep understanding across the agency of how decisions impact intended end users — not just the middlemen that run rampant in healthcare — but care teams, patients and their families. In government it is rare that technical expertise is a part of policy conversations. Living in a world where technology underpins every aspect of our lives, the implementation of every policy will rely on technology or have a technical implication. The Digital Service team at CMS started the journey with policy within the Quality Payment Program. From user testing of the program name to eventually releasing the first wireframes along side policy in the Federal Register, the team was one of the first in the country to incorporate user research and technical expertise into the policy making process. Our policy work didn’t end there, CMS policy teams across the agency opened their doors and let us assist on regulation after regulation. We provided feedback and suggestions on what implementation of policies might look like and technical requirements that would bring their vision to reality. We bridged a gap between policy and tech in a way that had never been done at CMS or really any government agency before. We spent countless days facilitating conversations, thinking through how you engage with users — something that might seem simple if you don’t know what the Paperwork Reduction Act is. Our skilled team not only came with technical expertise but the ability to navigate complex bureaucracy. We worked with CMS to develop new ways to procure services and products — building, testing and iterating on what works when agile development meets the Federal Acquisition Regulation. Procurement strategy became a success criteria for the projects we took on. We were shaping not only the procurements for projects we worked on but sitting in on other Technical Evaluation Panels, reviewing and writing Statements of Objectives and working with willing Contracting staff to share new ways of approaching problems. We helped the CISOs office think through ensuring the security of CMS systems and data, including awarding the first true Bug Bounty contract at HHS. Someone on the team was always the voice in the back of the room reminding everyone, “compliance is not security!” We assisted in responding to incidents, supporting our CMS colleagues as they faced hard challenges. We took lessons learned from those incidents and shared them in hopes of shaping a new future for Incident Management at CMS. Building the Team And while the team shrunk down to five people at our lowest, it was back up to over 25 on the day that I left. The time when we were only five people was the hardest for me as a leader and for our team. I was constantly making contingency plans while trying to embody the hope that our already burned out team needed. We were stretched too thin across too many projects all ready to burst at any moment. Based on the frequency of our 10 or 11pm phone calls and texts, I don’t think any of us slept much. One Friday I got a phone call from someone on the team who suggested we shut down all projects except one. I felt like I had failed as a leader and would fail to deliver on the promises I had made across the agency. 
That weekend, on giant sticky pads running across the wall in my bedroom I mapped out the next three months, marking when I thought new people would come. And in January, the month where I was worried it would potentially be *only me* left, I wrote a big circle with the words “nuclear option” next to them. I called an emergency team meeting the next Monday and reminded everyone why we were here. I attempted to reassure fears even though I was full of them myself. We just needed to hang on. We had to be hopeful and scrappy enough to find a way. We survived this far, through threats of an administration change and threats of budget and resource cuts. We did it while some of our major support systems doubted us and sometimes worked against us. And then new people showed up. We breathed collective sighs of relief as new faces were in the tiny conference room that we called our HQ. When I started getting frustrated phone calls and emails asking why my team was squatting in other peoples offices, I knew we made it. We had outgrown the tiny conference room. My proudest moment was sitting in our last “onsite offsite” and looking around at all the faces at the table. Brilliant individuals who left their other jobs to be part of the team at CMS because they believed in our mission, the runway we had to make a difference and a future where healthcare could really work for patients and providers. The work was personal — everyone on the team had a story of their own experience with the system, or through their children or their parents. They brought empathy and combined it with raw talent and skill to push through the most difficult days just trying to make a small difference. My last “Onsite Offsite” The projects we worked on constantly reminded me that while thousands of companies are playing buzzword bingo with their latest solution for healthcare, the basic improvements that we actually need are still years behind. If we don’t address them, we can’t move forward. And addressing them is exactly what our team has started to do. Days of reflecting in the canyon allowed me to see that the people are what matter most — the ones we serve and the ones who show up to do the work. And crossing through the “birth canal” after my last night sleeping near the Colorado River, I was unpacking not just what we had done but what would come in the future. My dad was just in front of me on the trail, quiet this time. No creation stories, just the occasional, “you ok Shannie?” with a quick stop to catch our breaths. There are two things that we can count on to make us feel small, the vastness of our natural environment and being in huge groups of people. Standing at the bottom the Grand Canyon, gazing from river to rim, seeing ecosystems that only exist below the Kaibab formation, cliff dwellings where thousands of years ago people lived, I always feel like a single spec of dust in a massive desert. And when I emerged from the last switchback at the top of the canyon I was met with the other — something I surprisingly hadn’t seen there before — thousands of people. I threw my pack on the ground and started crying. I was overwhelmed and exhausted while simultaneously healed and ready for what would come next. It was the perfect ending to my nearly four years at the United States Digital Service. What’s Next If you are out looking for a way to make healthcare better, come to the source. CMS is the heart of American healthcare and it needs help. The Digital Service team is always hiring and so is CMS. 
Your country needs you. As for me, I kept searching for something that would excite me as much as the work I was doing at CMS. While opportunities crossed my path, none felt as impactful. And so coming out of the canyon was a different kind of rebirth this trip. It was a recognition that while I was ready to move on from the Digital Service role, I was not ready to move on from CMS or Federal Service. Starting next week, I will be continuing my work at CMS as the Chief Technology Officer at CMMI, a team that I am beyond thrilled to join!
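The data-sharing work described above (Blue Button 2.0 and Data at the Point of Care) is built on the FHIR standard, in which claims are exposed as ExplanationOfBenefit resources. The sketch below shows roughly what a client request for that data looks like; it is only an illustration, and the base URL, token, and patient ID are placeholders rather than real endpoints or credentials.
# Illustrative only: fetching claims data from a FHIR server of the kind
# described above. The base URL and token below are placeholders.
import requests
BASE_URL = "https://example-fhir-server.example.com/v1/fhir"  # placeholder
ACCESS_TOKEN = "YOUR_OAUTH_ACCESS_TOKEN"  # obtained via the server's OAuth flow
def fetch_explanation_of_benefit(patient_id: str) -> list:
    """Return ExplanationOfBenefit resources (claims) for one patient."""
    resp = requests.get(
        f"{BASE_URL}/ExplanationOfBenefit",
        params={"patient": patient_id},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}",
                 "Accept": "application/fhir+json"},
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()  # FHIR searches return a Bundle resource
    return [entry["resource"] for entry in bundle.get("entry", [])]
if __name__ == "__main__":
    claims = fetch_explanation_of_benefit("example-patient-id")
    print(f"Retrieved {len(claims)} claim resources")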
https://medium.com/@shannonsartin/the-first-lines-of-code-fixing-healthcare-from-the-inside-out-aba5da686605
['Shannon Sartin']
2020-01-16 19:09:25.034000+00:00
['Healthcare', 'Civic Innovation', 'Civictech', 'Government', 'Healthcare Technology']
666
Urban Sharing’s new business developer Luis Aldana will help bring our smart rebalancing tool to the world
Urban Sharing’s new business developer Luis Aldana will help bring our smart rebalancing tool to the world Ariana Hendrix Follow Jan 14 · 6 min read Luis, a bicycle enthusiast from Guadalajara, Mexico, is here to help bring our predictive rebalancing and supervisor tool to the world. Like Urban Sharing, his background in micromobility was born out of bike share operations. He says this is the secret ingredient to creating the tools that customers actually need. What brought you into micromobility? Luis: It started with my passion for bicycles. In Mexico I used to move around everywhere by bicycle. It was kind of an act of rebellion, almost revolutionary, since the streets were so dangerous and hostile for cyclists. The cycling community in Guadalajara was quite close and I met a lot of people, including the guys from BKT. They had been working with urban furniture and then started a new company dedicated to public bike-sharing systems. When they won the contract to run Guadalajara’s new system I was studying urban planning, so joining them was a perfect match, and we started to work together on the planning and implementation of the system. Once we launched, I took over management of the logistics and distribution team, which was my main role. But we were a small team, so I had the opportunity to explore every single aspect of the operations, including technology, bicycles, stations, government contracts, and even how the payments were executed on a technical level. Building the system was all new for us so we had to do everything from scratch — that was really exciting. What were your key learnings from that experience? Working with the fantastic BKT team was one of the best experiences I have had. I would say working with them taught me about the complexity of the rebalancing challenge. In the beginning I was impressed by how few options there were for tools to manage operations. We tried some different ones but still had to do a lot of things manually. So I learned how big the problem was, and how even people working closely with public bike share systems can underestimate the complexity and the degree of the things needed to do operations well. For example, the local government just sees a light vehicle like a bicycle and they think, “How hard could it be? They’re just bicycles, you’re not dealing with trains or airplanes.” But it’s not like with trains, which move in a straight line. The nature of bike sharing is that it’s a network, and that exponentially complicates things. Where did that discovery lead you? I co-founded a company called Brick in 2017, where we developed Clinker. The idea was to create a detailed and automatic tool that could solve vehicle redistribution and all other operations aspects of the micromobility industry. This is one of the biggest challenges in any micromobiility operation, no matter what vehicle it is. It usually consumes up to 60% of an operator’s budget. What we were creating evolved to become an operator suite, a product called Clinker. It is a tool for redistribution, but also for station and bicycle maintenance; for the user of a vehicle; for customer service; for governments and third parties; and other players involved in the operation. Between 2017–2018, we worked on the concept. In 2019, we dedicated the first part of the year to developing a prototype, which was a fully functional tool. This was amazing for us since we did it in such a short period of time. 
In September I met Kristian (Urban Sharing’s VP of Strategy & Business Development) at the NABSA conference in Indianapolis. He asked me if I would consider continuing my work as part of Urban Sharing and I was like, “Yeah, let’s go to Oslo!” Tell us about your new role at Urban Sharing. What do you hope to achieve, and what knowledge are you bringing? I’m joining the business development team as someone who, while talking about the product, also has a very granular understanding of what’s needed in the market, and what the practical needs of the operators are. A lot of companies that are doing similar things have built their tools from a lab. They have never experienced shared mobility operations first-hand and on a daily basis. We’re not guessing, or imagining how things are for operations teams. We know how things are. There are so many small details to consider. For example, when changing or introducing operator tools, you think it looks easy on the app, but for those using it, changing an entire way of doing things can become a huge challenge for the operations teams. I’ve learned that you have to consider this at all times. That’s what makes a really good product. I have that experience, and so does Urban Sharing. I believe that’s what differentiates us. How will this benefit our customers? Our solution reduces costs for our customers and partners because we’re not experimenting. We’re an extremely experienced team, so there is not going to be trial and error. One of the things I want our customers to experience is this sense of “Finally, there is a product that can really tackle what’s going on.” For them, operations are very rigid structures, so even just testing things can be extremely complex. Transitioning from one tool to another could take months, so we take it really seriously that they should be able to start running with it once they choose our product. And the way we honor that decision is by giving them an extremely reliable product. Another important thing is that the tool has to make a lot of sense to the final operator, which is the person driving the truck, or doing the maintenance. That is the person who knows better than anyone what is needed, and they’re often forgotten. We’re creating a bridge of understanding where we know both sides of the story. Luis and Kristian Brink, VP of Business Development, share ideas at Urban Sharing’s Oslo office. We’re so excited to have you as a full-time team member at our headquarters in Oslo. What are your first impressions after moving here? The first thing I noticed was that the quality of the air here is fabulous. It’s so clean and fresh, and the sky is so blue! I love the cold weather. I’m also excited to learn Norwegian. I think the Scandinavian culture is amazing, and I especially love the aesthetics of the city, and how everything is so well thought-out and organized. It makes you feel super comfortable. Finally, what is your overall vision for the future of urban mobility? We’re quickly realizing that cars — personal cars especially — are the disease of the century. I hope that we can rapidly start to take action on that knowledge. We know that around 70% of trips in cities are only one mile in length, which is perfect for micromobility. In five years, it would be ideal to get to a place where 20–30% of urban trips are being made by micromobility. When thinking about the future, these short trips should be made by shared vehicles, because it doesn’t make sense owning anything if you can share it instead. 
Right now, in general only about .5% of trips are made by shared micromobility vehicles. I think the opportunities for the future of urban mobility are immense, and this game is just getting started.
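The rebalancing challenge Luis describes can be illustrated with a toy prioritisation rule: rank stations by how far their projected inventory drifts from what riders will need. This is only a sketch, not Urban Sharing's predictive tool; the station data and the scoring rule are made up.
# Toy rebalancing prioritisation: stations whose projected bike count over the
# next hour falls short of demand (or overflows capacity) are visited first.
# The stations and the demand forecasts here are invented examples.
from dataclasses import dataclass
@dataclass
class Station:
    name: str
    bikes: int              # bikes currently docked
    capacity: int           # total docks
    expected_rentals: int   # forecast pickups next hour
    expected_returns: int   # forecast drop-offs next hour
def imbalance(s: Station) -> int:
    """Positive = bikes to deliver, negative = bikes to remove."""
    projected = s.bikes - s.expected_rentals + s.expected_returns
    if projected < 0:
        return -projected                   # shortfall: bring this many bikes
    if projected > s.capacity:
        return -(projected - s.capacity)    # overflow: take this many away
    return 0
stations = [
    Station("Central", bikes=2, capacity=20, expected_rentals=12, expected_returns=3),
    Station("Harbour", bikes=18, capacity=20, expected_rentals=2, expected_returns=9),
    Station("Park", bikes=10, capacity=20, expected_rentals=5, expected_returns=5),
]
# Visit the most imbalanced stations first.
for s in sorted(stations, key=lambda s: abs(imbalance(s)), reverse=True):
    print(s.name, imbalance(s))
A production tool would replace the hand-written forecasts with a learned demand model and fold in travel times and truck capacity, but the core idea of ranking stations by predicted imbalance is the same.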
https://medium.com/urbansharing/urban-sharings-new-business-developer-luis-aldana-will-help-bring-smart-rebalancing-to-the-world-c502dc2f7b32
['Ariana Hendrix']
2020-01-14 19:12:47.279000+00:00
['Software', 'Mobility', 'Transportation', 'Technology', 'Business Development']
667
Tesla’s Full Self Driving Beta Has Changed the Game
Tesla’s Full Self Driving Beta Has Changed the Game It’s the Quantum Leap that Elon Musk Promised Tesla FSD Beta visualisation (image source) Tesla’s beta version of their Full Self Driving (FSD) software has been released to a small number of people and it’s mind-blowing! This release gives us a glimpse at the (FSD) rewrite that Tesla has been working on in the last year or so. The aim of this limited beta is to prove the FSD package is safe enough to roll out to a larger audience. Just before this release, I wrote an article detailing what Tesla has been working on and how this release was going to be a “quantum leap”. Honestly, I think they’ve lived up to the hype. Here’s everything exciting that we’ve seen in the beta so far and what it means for the future of autonomous driving. FSD Beta Delivers New Features For a long time now, any Tesla equipped with Autopilot has been able to stay in its lane and monitor its speed based on traffic. This feature was mostly for highway driving but overtime was good enough that it could be used on city streets, with a few caveats. These features were called Enhanced Autopilot and the Full Self Driving package unlocked automatic lane changes and exiting/merging on to highways along with most recently, understanding traffic lights. Each of these progressions slowly moved us closer to the promised full city street driving, where the car could take you from point A to point B with little to no interventions. However, Elon said that their existing approach was trapped in a “local maximum” and therefore it looked unlikely to deliver the full feature set. Elon Musk Autopilot trapped in local maximum To overcome this, the team has re-written the core FSD architecture meaning multiple images from the front camera array can now be labelled and processed together and “correlated in time”. Before each image was processed independently which is why they were struggling to progress. So what does all this mean? What has it delivered? Here’s a list of all the cool features that the FSD beta has unlocked and links to example footage from the various beta testers. One thing to note whilst watching these clips is that the UI is a “developer” interface. Its purpose is usually to show Tesla engineers what the car is seeing when developing and debugging the FSD software. For this beta, it has been enabled to the limited user set when using FSD on city streets to give them a better understanding of what the car is doing and seeing. This is important as the car is still in beta so will not be perfect. Giving the beta testers some on-screen visualisations helps them understand what the car is seeing and hopefully prevent any surprise manoeuvers. The Future of Full Self Driving As you watch some of the clips above, you’ll probably notice a few things. Firstly, the driving experience is by no means perfect and second, that drivers are still intervening to help the car out. But this is ok, in fact, it's what Tesla wants. Tesla is using each of the beta testers as a driving teacher for their software. They’ve been developing this for a while now, testing between themselves privately and Elon himself stated that he’s had an alpha version of this FSD rewrite for a while. The most important test for the software though is large scale, real-world use. No amount of simulation can equal the day to day situations of the real world, or as Elon would say Think back to when you were learning to drive. 
You start off with some theory and you’ve probably watched parents and others driving for years before getting in the driver's seat but none of this prepares you for actually driving on the road by yourself. We learn by driving on real roads but with an experienced teacher who is there to guide us and intervene when we make mistakes. This is exactly the approach Tesla is taking with its FSD software. The FSD beta is a learner driver taking to the road for the first time and Tesla’s entire fleet of vehicle owners will be the teachers. Already, in just a week, the beta testers have received updates that have vastly improved their experience and brought new features. This is why a gradual release of this beta is so important. As more and more bugs are ironed out and this beta makes its way into more vehicles, not only in the US but worldwide, it’s only a matter of time before it is driving without its human teacher. How long this will take is still to be seen, however, Elon has said that if all goes well with the initial testing then a public beta should be released in the US by the end of the year. Telsa’s Generic Approach vs Waymo The final huge thing to point out here is Tesla’s approach to full self-driving. You might wonder what’s taking Tesla so long when there are completely autonomous vehicles on the road today from companies like Waymo, which require no human in the driver's seat. The reason Waymo can do this is that they use highly detailed pre-built maps that “highlight information such as curbs and sidewalks, lane markers, crosswalks, traffic lights, stop signs, and other road features.” This means they can only drive in areas that have been mapped but it gives them a detailed understanding of what the world looks like at the cars current GPS coordinates. They use cameras and lidar sensors to detect other cars, road signs and traffic light colours so the car can drive safely on public roads. This approach gets you to “real” full self-driving a lot quicker. This is because it’s predictable, Waymo can guarantee that the car knows everything about its surroundings and therefore only has to prove it’s capable to drive in this location. The downside is that it requires pre-built maps, meaning the car cannot drive itself in a location that has never been mapped and if an area changes (roadworks, roads are updated etc.), the maps need to be updated. The lidar sensors used in the car are also currently very expensive, meaning mass production costs could be too high. Commercial releases of these vehicles do not exist yet and therefore currently, Waymo vehicles are used as a Taxi service. In contrast, Tesla’s approach is more “generic”. Tesla is effectively trying to solve human vision and decision making in the context of driving a vehicle. To keep costs low, they don’t use Lidar sensors (that are great at measuring the distance of objects) and instead hope to solve this problem the way humans do, with vision (using cameras). This keeps the cost of the vehicles low, as cameras are relatively cheap but the downside is the software takes longer to develop. This approach means that Tesla vehicles are being built to drive anywhere and deal with any scenario that a human driver would encounter. They are effectively building a machine that sees and interprets the real world as a human would. This means their cars can and will be full self-driving almost anywhere on earth. 
Complications arise as you enter into different countries with different road laws and signs, however, with a fleet of vehicles already collecting data in these countries, it’s just a matter of using this data to “teach” the vehicle to drive under these new restrictions. This is exactly the same thing that we do as humans when renting cars when on vacation in a different country.
https://medium.com/swlh/teslas-full-self-driving-beta-has-changed-the-game-367c7b53523d
['Lewis Gavin']
2020-11-03 20:06:05.621000+00:00
['Technology', 'Artificial Intelligence', 'Machine Learning', 'Data Science', 'Tesla']
668
Symaps Spatial Data Catalog
Symaps is an AI-powered location platform that helps you find the best locations for your business. Our team of experienced data scientists gathers, processes, analyzes, and models large volumes of raw multidimensional data (public and private) to distill it into practical, easy-to-use, and visually appealing map-based insights. Symaps's technology can compile and enrich different types of large datasets for use in deep geospatial analysis. Adding many sources of geospatial data to a smart map makes it possible for you to analyse a location in depth. We typically work with three different types of datasets: premium data, public sources, and private business datasets.
I. Symaps Premium Spatial Data Streams
People Flow Data (Anonymous footfall from millions of devices)
This data is available in more than 180 countries and usually consists of anonymous heatmaps built from data coming from millions of anonymous devices. This highly accurate data consists of anonymous, GDPR-compliant GPS tracks, which are both highly detailed and precise, and maintain anonymity and privacy. Thanks to this data, we can determine the volume of people who pass through a selected address according to the time of day, and identify daily and weekly trends. There's no need to make uninformed decisions on these crucial metrics.
top-notch people flow visualization
Symaps Global Places (Insights into 200M places)
This is the location data for business establishments, brands, competitors, restaurants, points of interest, landmarks, schools, attractions, and more… Symaps Places insights deliver powerful business analytics that explain how consumers connect with places in the real world. Symaps.io continuously updates (every 60 seconds) and validates Global Places data to capture real-world changes and give the best insights possible.
Symaps Demographic Data (Worldwide): Even where there is no census.
Symaps.io integrates spatial statistical methods, exploiting the power of machine learning, to transform and disaggregate population counts from administrative unit levels down to a 100x100m grid square level, exploiting relationships with spatial covariate layers from satellites and other sources. We enhance this data further by adding an age-gender pyramid distribution (5-year gaps).
Understand the footfall around your stores
II. Public sources
Census data: Symaps integrates the most recent census data, including information about people such as age, gender, income, household types, and more.
Geographies: Digital boundaries for data aggregation and display on a map.
III. Private business data
Customer transaction data & loyalty card data: We enable our users to enrich Symaps' service with their own data. They broaden our analysis with historical transaction data and loyalty card data.
Our platform is built for emerging and established brands and retailers looking to scale their business or optimize their existing locations. With a strong focus on retail, FMCG, and real estate, Symaps combines the power of machine learning with AI and acts as the backbone for companies for whom location counts when it comes to making key strategic decisions. Want to have access to our technology? Feel free to reach out here!
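The population disaggregation described above (spreading administrative census counts over a 100x100m grid using covariates) can be sketched roughly as follows. This is a generic illustration, not Symaps' model; the covariate names, column names, and the random-forest choice are assumptions.
# Minimal sketch of dasymetric population disaggregation: learn a relationship
# between grid-cell covariates (e.g. built-up area, night-time lights) and
# population density, then redistribute each admin unit's census count across
# its cells in proportion to the predicted density. Columns are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
def disaggregate(grid: pd.DataFrame, admin_totals: pd.Series) -> pd.Series:
    """grid: one row per 100x100m cell with 'admin_id' plus covariate columns.
    admin_totals: census population indexed by admin_id."""
    covariates = ["built_up_area", "night_lights", "road_density"]
    # Train on coarse targets: the average density implied by each admin unit.
    train = grid.copy()
    cells_per_admin = train.groupby("admin_id")["admin_id"].transform("size")
    train["target_density"] = train["admin_id"].map(admin_totals) / cells_per_admin
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(train[covariates], train["target_density"])
    # Predict a relative weight per cell, then rescale so cell populations sum
    # exactly to the official count within every admin unit.
    weights = pd.Series(model.predict(grid[covariates]), index=grid.index).clip(lower=0)
    weight_sums = weights.groupby(grid["admin_id"]).transform("sum")
    return weights / weight_sums * grid["admin_id"].map(admin_totals)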
https://medium.com/@symaps/symaps-spatial-data-catalog-92e8eca56683
[]
2020-12-18 12:10:13.029000+00:00
['AI', 'Consulting', 'Geomarketing', 'Retail Technology', 'Data']
669
The World’s First Desktop Rubber 3D Printer
Atomstack Cambrian, the most advanced 3D printer for thermoplastic rubber filament. With improved elasticity, high resiliency, Cambrian prints superior rubber ( 50–70A hardness) as well as traditional filaments such as PLA, TPU, TPE, ABS, PETG. The Most Advanced Elastic Rubber Filament For End-Use Parts With exclusive Rub60 & Rub70 (50–70A hardness) filaments, the exclusive rubber extruder is capable of seamlessly producing virtually any end-use rubber products. Created for Real-World Applications Atomstack invented a special rubber filament with hardness of 50–70A that has never been available before for desktop 3D printers. Atomstack Cambrian turns your dreams into reality and prints them with one-click. High-Performance FDM Printing Atomstack Cambrian perfectly prints traditional filaments such as PLA, TPU, TPE, ABS, & PETG. Architecture models, medical supplies and other creative ideas easily and professionally. Create Affordable End Use Products Save time and money with Atomstack Cambrian. A custom set of sneakers cost $10 only to print! Design anything you can imagine and unleash your creativity into real-life products. Every Detail Matters Reusable Filament Plate Design Atomstack cares about the environment as much as you do. We designed reusable filament plates for a smaller carbon footprint and better value. Applications Atomstack Cambrian will be launched on Kickstarter on Jan 5th, 2021 at 9 AM, EDT! If you are a 3D printing geek, you sure don’t want to miss this!
https://medium.com/@crowdfundingdaily/the-worlds-first-desktop-rubber-3d-printer-e9d1c837625f
['Crowdfunding Daily']
2020-12-17 10:16:19.244000+00:00
['Gadgets', 'Kickstarter', 'Crowdfunding', '3D Printing', 'Technology']
670
A-Z Of Exploratory Data Analysis Under 10 mins
3. UNIVARIATE ANALYSIS
Univariate analysis, as the name says, simply means analysis using a single variable. This analysis gives the frequency/count of occurrences of the variable and lets us understand the distribution of that variable at various values.
3.1. PROBABILITY DENSITY FUNCTION (PDF) :
In a PDF plot, the X-axis is the feature on which the analysis is done and the Y-axis is the count/frequency of occurrence of that particular X-axis value in the data. Hence the term "Density" in PDF.
import seaborn as sns
sns.set_style("whitegrid")
Seaborn is the library that provides various types of plots for analysis.
sns.FacetGrid(haberman_data, hue='surv_status', height=5).map(sns.distplot, 'age').add_legend()
Output :
PDF of Age
Observations :
Major overlapping is observed, so we cannot clearly determine how survival depends on age. A rough estimate is that patients aged 20–50 have a slightly higher rate of survival and patients aged 75–90 have a lower rate of survival. Age can be considered a useful predictive variable.
sns.FacetGrid(haberman_data, hue='surv_status', height=5).map(sns.distplot, 'op_year').add_legend()
Output :
PDF of Operation Year
Observations:
The overlap is huge. Operation year alone is not a highly predictive variable.
sns.FacetGrid(haberman_data, hue='surv_status', height=5).map(sns.distplot, 'axil_nodes').add_legend()
Output :
PDF of Axillary nodes
Observations:
Patients with 0 nodes have a high probability of survival. Axillary nodes can be used as a predictive variable.
The disadvantage of the PDF: we can't say exactly how many data points lie in a range, below a value, or above a particular value.
3.2. CUMULATIVE DISTRIBUTION FUNCTION (CDF) :
To know the number of data points below/above a particular value, the CDF is very useful. Let's start by segregating the data according to the survival class.
survival_yes = haberman_data[haberman_data['surv_status']==1]
survival_no = haberman_data[haberman_data['surv_status']==2]
Now, let us analyze these segregated data sets.
import numpy as np
import matplotlib.pyplot as plt

# count : the number of data points in each bucket
# bin_edges : the separation values of the X-axis (the feature under analysis)
# bins : the number of buckets of separation
count, bin_edges = np.histogram(survival_no['age'], bins=10, density=True)
pdf = count/sum(count)
print(pdf)
# To get the CDF, we want cumulative values of the count. In numpy, cumsum() does the cumulative sum
cdf = np.cumsum(pdf)
print(cdf)

# repeat for the patients who survived, keeping separate bin edges for each group
count2, bin_edges2 = np.histogram(survival_yes['age'], bins=10, density=True)
pdf2 = count2/sum(count2)
cdf2 = np.cumsum(pdf2)

# plot each group against its own bin edges, with the label matching the group plotted
plt.plot(bin_edges[1:], pdf, label='no')
plt.plot(bin_edges[1:], cdf, label='no')
plt.plot(bin_edges2[1:], pdf2, label='yes')
plt.plot(bin_edges2[1:], cdf2, label='yes')
plt.legend()
# adding labels
plt.xlabel("AGE")
plt.ylabel("FREQUENCY")
Output :
[0.03703704 0.12345679 0.19753086 0.19753086 0.13580247 0.12345679 0.09876543 0.04938272 0.02469136 0.01234568]
[0.03703704 0.16049383 0.35802469 0.55555556 0.69135802 0.81481481 0.91358025 0.96296296 0.98765432 1. ]
Text(0, 0.5, 'FREQUENCY')
CDF, PDF of segregated data on age
Observations:
Around 80% of the data points have age values less than or equal to 60.

count, bin_edges = np.histogram(survival_no['axil_nodes'], bins=10, density=True)
pdf = count/sum(count)
print(pdf)
cdf = np.cumsum(pdf)
print(cdf)

count2, bin_edges2 = np.histogram(survival_yes['axil_nodes'], bins=10, density=True)
pdf2 = count2/sum(count2)
cdf2 = np.cumsum(pdf2)

plt.plot(bin_edges[1:], pdf, label='no')
plt.plot(bin_edges[1:], cdf, label='no')
plt.plot(bin_edges2[1:], pdf2, label='yes')
plt.plot(bin_edges2[1:], cdf2, label='yes')
plt.legend()
plt.xlabel("AXIL_NODES")
plt.ylabel("FREQUENCY")
Output :
[0.56790123 0.14814815 0.13580247 0.04938272 0.07407407 0. 0.01234568 0. 0. 0.01234568]
[0.56790123 0.71604938 0.85185185 0.90123457 0.97530864 0.97530864 0.98765432 0.98765432 0.98765432 1. ]
Text(0, 0.5, 'FREQUENCY')
CDF, PDF of segregated data on axillary nodes
Observations:
Around 90% of the data points have axil_nodes values less than or equal to 10.
3.3. BOX PLOTS
Before exploring box plots, here are a few commonly used statistical terms:
The median (50th percentile) is the middlemost value of the sorted data.
The 25th percentile is the value in the sorted data which has 25% of the data below it and 75% of the data above it.
The 75th percentile is the value in the sorted data which has 75% of the data below it and 25% of the data above it.
In the box plot, the lower line represents the 25th percentile, the middle line represents the median (50th percentile), and the upper line represents the 75th percentile. The whiskers represent the minimum and maximum in most plots, or some more complex statistical values. The whiskers are not the min and max values when seaborn is used.
sns.boxplot(x='surv_status', y='age', data=haberman_data)
sns.boxplot(x='surv_status', y='axil_nodes', data=haberman_data)
sns.boxplot(x='surv_status', y='op_year', data=haberman_data)
Output :
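To read off the exact values that the box plots encode, the same percentiles can also be computed directly with NumPy; a quick sketch using the survival_yes and survival_no data frames defined above:
# Quartiles of axillary node counts for each survival class,
# matching the three lines that sns.boxplot draws for each box.
for label, group in [("survived", survival_yes), ("did not survive", survival_no)]:
    q1, median, q3 = np.percentile(group['axil_nodes'], [25, 50, 75])
    print(label, "-> 25th:", q1, "median:", median, "75th:", q3)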
https://towardsdatascience.com/a-z-of-exploratory-data-analysis-under-10-mins-aae0f598dfff
['Ramya Vidiyala']
2020-05-14 14:39:14.813000+00:00
['Machine Learning', 'Technology', 'Artificial Intelligence', 'Education', 'Data Science']
671
Review of DataCamp - Learning Skills for the Future of Work
Data Science is one of the hottest fields in computer science nowadays and is expected to grow exponentially in the coming years. The number of job postings asking for data scientists has steadily increased, giving rise to a number of online Ed-tech companies that sell courses to teach buyers data wrangling chops. According to LinkedIn, in August 2018 the U.S. tech industry was experiencing a shortage of 150,000 data scientists, and the demand for data science skills has only gained more steam since then. DataCamp is an upstart in the online Ed-tech industry that trains folks with little to no technological background in data science and associated fields. DataCamp and its online peers are filling the gap of not enough technically skilled professionals by providing supplementary education at a fraction of the cost, in lieu of formal university degrees. In a nutshell, a DataCamp subscription is worth its money only if you are a newbie to the data science field. For intermediate and advanced users, the courses aren't thorough and deep enough to deliver much value. Read on for a detailed review.
DataCamp was founded by Martijn Theuwissen, Jonathan Cornelissen, and Dieter De Mesmaeker in 2013. Since then DataCamp has come a long way and now offers a number of courses varying in skill and difficulty level for languages and tools used extensively in the data science domain, such as Python, R, Power BI, Excel, etc. So far the company is purported to have raised $31.1 million, thanks to the explosive growth in its number of users, with the most recent funding raise of $25 million in 2017.
The site provides various choices for users to customize their learning experience. The spectrum of instruction formats is as follows: Pursue individual courses catering to a specific and narrow topic. Enroll in a skill track that consists of a series of curated courses aimed at teaching a broader skill set, e.g. "Importing and Cleaning Data" or "Data Manipulation". Follow a career track that focuses on teaching several skills expected of an industry role such as data scientist, R programmer, data engineer, machine learning scientist, etc.
DataCamp offers various pricing tiers for individuals as well as businesses. However, you can only purchase annual subscriptions, though the marketing displays the price as a monthly cost (billed yearly). This is a slight departure from other Ed-tech companies such as Educative that offer monthly subscriptions. The free tier lets you access the first chapter of each course in addition to seven projects, three practice challenges, and one skill assessment. You may sign up to get a look and feel of the user interface and play around.
Overall, the quality of the courses is pretty solid. Each course is designed to be interactive with in-browser code execution and comes with video tutorials. You can easily switch between the various modes depending on your liking. There's also the option to switch from reading a course on your desktop to your mobile phone while you are on the move. The interface is gamified and lets users collect points called "XP" if they successfully complete exercises without receiving hints or seeing the answers. The courses are non-graded and provide a statement of completion at the end that can be shared on a user's LinkedIn profile.
Note that the content of the courses is not novel in any sense. You can always google any topic for free content and set up a Python IDE on your local machine for hands-on practice. The real value DataCamp delivers is in streamlining your learning experience through a one-stop portal. You don't have to bother with installing a Pandora's box of tools and dependencies on your local machine or opening a dozen Chrome tabs for the online blogs you find on Google for a given topic. At the same time, DataCamp falls short of the credibility and rigor of a university degree in data science; rather, it teaches practical, industry-focused tools and skills. Most Ed-tech companies sidestep the dry theory and head-spinning math that accompany data science, and DataCamp is no different.
One notable drawback DataCamp suffers from is the absence of interview preparation content. We expect the majority of DataCamp's readership to be students, budding technologists, and folks switching to the computer science realm from other professions. The end for these users is a well-paid job, and data science is a means to that end. DataCamp has baked the cake but is missing the icing. It would be ideal if DataCamp introduced courses similar to the ones offered by Educative that focus solely on interview preparation for data science jobs. Some of Educative's competing offerings are:
All in all, if you are a motivated self-learner or someone mulling a career change, DataCamp is a good start, especially for the latter if you don't want to spend thousands of dollars enrolling in a university-accredited data science program. You can start with DataCamp and get firsthand experience of what it feels like to work in the data science realm, much like a litmus test to ascertain your aptitude for the field before plowing in serious dollars.
https://medium.com/double-pointer/review-of-datacamp-learning-skills-for-the-future-of-work-3dfafd012212
['Double Pointer']
2020-12-09 21:09:59.193000+00:00
['Technology', 'Interview', 'Data Science', 'Tech', 'Data Scientist']
672
3 Programmers Got Fired (Including Me) Due to a Single App Crash
3 Programmers Got Fired (Including Me) Due to a Single App Crash Photo by Lucian Alexe on Unsplash I am now doing my third job. In my previous two jobs, one time I got fired and another time I resigned. But getting fired was a horrible experience for me. I cried the whole day. I never told anyone about that — my friends, my new colleagues. I felt so ashamed and humiliated that I made up some lie for all about why I left the last job. I couldn’t even tell my parents because they would get very upset. I only shared it with my boyfriend. He was so supportive and helped me to get a new and better job. Let’s get into the story. Problems of a low-funded startup I worked in a startup aged only one year. There were a total of four partners. They only had one angel investor and were looking for more investors. They mainly made enterprise solutions. You know startups have many problems. One of the main problems is the funding problem. In the beginning, a startup has to do a lot of work but doesn’t have enough resources. It pays less but expects a lot of output from the developers. I joined there in February 2019. After three months, I got promoted from intern to junior developer. In the internship period, I got only $100/month. I had no regrets about that because I needed job experience. There was a total of five programmers. All of us had to do a lot of work. We had to do overtime at least four out of six days a week. But the company didn’t pay us for the overtime. They never even said thanks. They acted like we were supposed to work overtime. This is a red flag for developers. I would suggest to all the developers that if you find that your company always pushes you to work overtime without extra benefits, make a plan to change your job. Because the scenario probably won’t change. Arrogant CTO gave us more tasks than we could do The CTO gave us all the tasks for a whole week. He didn’t care how fast or how slowly we did them. But he always gave us tons of work that would be very hard for even senior programmers to finish in a week. We had no senior programmers. All of us were junior programmers. There was no tester, no designer. We had to work a minimum of nine or ten hours a day, 54–60 hours/week. If you didn’t work, you would lose your job. If anyone couldn’t do all the tasks within time, he would humiliate them in front of all the other developers. He was one of the partners, so we couldn’t complain about him to a higher authority. If anyone came to the office five minutes late, he cut their pay to a half day’s salary. But no one got extra money when they had to work for one or two hours extra. I missed two interviews because I couldn’t manage time to attend them and I was not in a position to take the risk of losing this job. App crashed Then one day the CEO took on a new project that needed to be done within one and a half months: a mobile app and a web platform for building a customized delivery platform. The initial target was to build a prototype to show a potential investor in order to raise money. The CTO told us that it’s very hard to get an appointment with that investor, so we had to build it within one and a half months. One and a half months would be a very tight schedule for any team. We were very depressed when we heard we must have it made within that short time. We knew we all would have to do a lot of overtime. They chose three developers for this project, including me. 
One was a backend developer, one was a Flutter mobile developer, and the other one was a frontend web developer (me). But we still delivered it on time. Of course, there were bugs. We told this to both the CTO and the CEO. They seemed bothered but didn't say anything at first. The app crashed on mobile when it was being demonstrated in front of investors. It crashed because of a text field. That text field was supposed to accept only numbers, but the CEO entered numbers and characters. For fast development, we used Firebase's Cloud Firestore to keep the data. When the user pushed string data instead of number data into Firestore from the mobile frontend side, the app crashed.
The investment was rejected, and we were the scapegoats
Investors rejected the investment. In my judgment, of course, the first fault was the CTO's. He should never have taken this project on with such a short timeline. The second fault was the CEO's. He didn't even try the app once before presenting it to the investors. He should have prepared properly, because every app has bugs. And if you develop a project within one and a half months without testing, it will have many. Today's young entrepreneurs might have many advantages, but they have one major problem: a lack of experience. If they had more, the working environment in a startup would be more productive. After getting rejected by the investors, the CEO wanted a clear explanation from the CTO. As usual, he didn't say that the timeline was the problem. The CTO said we were the problem. So we were the scapegoats. ☹
Two months' salary and we are gone
I was involved in frontend web development. I had no involvement in the mobile development side. Still, they fired me. They said the design was not good. I am not denying that. But I am not a designer, and they should have considered that. According to the job agreement, the company had to notify us two months before firing us. Because the CEO was so upset, he fired us immediately with two months' salary. I am thankful for that pay, because it carried me through the two-month gap. In the meantime, I applied to seven companies and found a job. But I will never forget the shame. I know maybe I shouldn't feel that way. But still, that memory haunts me.
Last few words
No one is perfect. Employers, please try to understand that. I am not saying we were the best programmers, but you shouldn't demand that much output with poor management, an inexperienced CTO, and low-paid junior developers. Not all management is the same. I have met and heard of some great CEOs and CTOs. Still, this was my worst job experience, and one of my worst life experiences. And I will say to all developers: please don't stay in a job where you have no respect, no value, and a lot of pressure. If I had stayed there six more months, I would have fallen two years behind in my career.
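The crash described above, string data pushed into a field the app expected to be numeric, is the kind of failure a small defensive check prevents. Here is a minimal, illustrative sketch in Python rather than the original Flutter code; the field names are made up.
# Illustrative only: validate and coerce a numeric form field before writing it
# to the datastore, instead of trusting raw user input. Field names are hypothetical.
def parse_quantity(raw_value: str) -> int:
    """Convert user input to an int, raising a clear error instead of crashing later."""
    cleaned = raw_value.strip()
    if not cleaned.isdigit():
        raise ValueError(f"'{raw_value}' is not a valid quantity; digits only.")
    return int(cleaned)
def build_order_document(form: dict) -> dict:
    """Build the document to store; reject bad input at the boundary."""
    return {
        "address": form["address"],
        "quantity": parse_quantity(form["quantity"]),  # guaranteed to be an int
    }
# The bad input from the demo would now fail loudly at submission time,
# with a message the user can act on, rather than crashing the app later.
try:
    build_order_document({"address": "Main St 1", "quantity": "12a"})
except ValueError as err:
    print("Form error:", err)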
https://betterprogramming.pub/3-programmers-got-fired-including-me-due-to-a-single-app-crash-35d4c94555da
['The Woman']
2021-07-09 14:09:00.871000+00:00
['Careers', 'Programming', 'Software Development', 'Technology', 'Startup']
673
We work on your iPhone and Samsung screen replacement in Auckland
Damaged and cracked screens are among the most common phone problems today. People often run into trouble with broken handsets and cracked screens, and the most popular mobile phone brands are Samsung and Apple. The shops that deal in mobile accessories in Auckland usually offer mobile phone repair services as well. We barely put our phones down, and sooner or later most of us accidentally end up with a broken screen; at the same time, we can't live without our phones for a single minute. As the digital era advances, we all face some difficulty keeping up with our phones and their fast-evolving development. The technology is getting more advanced, and so are we with our modern gadgets. Always go to a trustworthy and reliable repairer.
How have mobile phone repair services changed?
There were days when a mobile phone was difficult and slow to get fixed. The common problems repair experts worked on were battery replacements or software issues. But as times changed, a wave of advanced smartphones arrived. These sleek devices are made waterproof, yet their screens and cameras remain their most valuable parts. Regardless of what kind of screen protector or sturdy case you use, roughly 75% of the damage is taken by the screen and glass parts when the phone is dropped or meets with a mishap.
How do you get your phone repaired?
Samsung screen repairs are handled either by the company's service centres or by mobile repair shops. You used to have to leave the device for a couple of days to get it fixed. But these days it is hard to imagine living without a means of communication, and it is difficult to manage a daily routine without a phone. That is why services like Omni Tech were created to make life simpler. They help people who are looking for a simple touch-screen fix, offering a straightforward way to repair damaged screens at your doorstep. Just visit the Omni Tech site and scroll down to the repair options. Select the brand, model, and colour of your phone and request a quote, for example for an iPhone screen replacement. Book an appointment and leave the rest to the technician, who will visit on the appointed day to make the necessary replacements. Without a doubt, the process is simple and won't take much of your time. The screen repair charges are reasonable, and the technicians are friendly as well.
You don't need to worry:
There is no need to stress about the quality of the replacement screens. Every accessory we use is sealed and comes with a seven-day warranty, and in the meantime you can watch the whole repair process as it happens. To build your trust and maintain reliability, the company also offers a 6-month guarantee on screen repairs. We also unlock iPhones as well as other brands. You can easily speak with the experts through the help desk, which is available all day, every day. If anything happens to the repaired part, there is a 7-day money-back benefit. Getting a phone screen repaired is now simple and hassle-free with Omni Tech.
https://medium.com/@omnitechstorenz/we-work-on-your-iphone-and-samsung-screen-replacement-in-auckland-6335fd09fffa
['Omnitech Limited']
2019-12-07 06:25:34.985000+00:00
['Unlock Iphone', 'Mobile Accessories', 'Samsung Screen Repair', 'Technology']
674
With Blockchain’s real estate development, we are all quite accustomed to all the hidden perks of…
How to Shift to Coding for Decentralized Systems? With Blockchain's real estate development, we are all quite accustomed to all the hidden perks of centralized systems. Decentralized systems, on the other hand, offer a whole new world of possibilities. If you are looking forward to shifting your coding to decentralized systems, there are several aspects to look for. There is no involvement of any third party, which certainly promotes transparency above anything else. Needless to say, all the transactions can easily be seen through the user's public domain. Having a clear understanding of STO Development Services is very crucial. We cannot precisely say how we entered the digital world, but there is no turning back. In fact, real estate blockchain development offers a crypto exchange option if you have a real estate property that needs to be taken care of. Creating a secure and advanced platform is absolutely necessary for a decentralized digital ecosphere. Blockchain is one of the trusted sources that is helping to build a financial system that depends on cryptocurrency. As we are all aware, the year 2020 has been one of the toughest years due to the pandemic, followed by an economic crisis. Despite all the odds, real estate blockchain development has bright future plans for its users. If we look into the global economy, real estate plays a vital role. It generates trillions of dollars, yet with time there has been a downfall in terms of transactions. Clearly, the transparency between the different bodies maintaining the financial aspects has not shown positive progress. As a result, here are some of the benefits offered by Blockchain's real estate development. Allowing tokens to be used for transactions involving real estate assets. With an automated process in the real estate area, the cost can be reduced to a certain extent. The user can now access global assets and buy and sell them efficiently. With transparency in the data, as a user you can get your hands on the information, which further helps you to make better financial decisions and build a profitable portfolio. How does Security Token Offering Development help an investor? To simplify it, these tokens are assets that can be compared to shares of stock, warrants, and bonds. The Security Token Offering Development team is entirely responsible for guiding the investor and increasing the chances of earning a profit. STO stands for Security Token Offering, which is a term used for digital or cryptographic exchange. To dive deeper into the topic, STO Development Services carefully help to secure the token, and the technical team can create a dashboard for better understanding the fund and investors' interest. STO services ensure the digital transaction is safe, and one can continue with the trading. Here are a few examples of how Blockchain's real estate development has transformed the real estate industry. One needs to keep in mind, when entering the real estate market, guidance from an experienced person, in-depth knowledge, and lastly, executing the idea with help from professionals from the real estate field. Managing assets and real estate funds. Financing projects in real-time. Accounting details being shared in real-time. Guidance to investors before investing in any particular project. Last but not least, managing the property. Planning of the real estate only after consultation with the group of people designing urban societies.
Does Blockchain's real estate development help in urban planning? Whenever there is an urban community, the initial work of real estate development starts with help from the people staying in that community. Often those individuals feel left out for not taking part in the planning procedure. But a blockchain-based planning program can educate people; in return, they can earn incentives for active participation through a few educational resources. This ensures that the community of people is happy, thereby improving public confidence. It also helps the community thrive and prosper together by achieving sustainable success. The Bottom line Lastly, one needs to educate oneself before entering the cryptocurrency territory. Blockchain is one of the leading bitcoin explorer services. Since this is a comparatively new area, people need to gather as much knowledge as they can to do it right. Hence the need for various teams to make sure the investment bears fruit over time; one such important part is STO Development Services and STO Solutions, which will help you make better decisions with your investments.
https://medium.com/@codezeros/with-blockchains-real-estate-development-we-are-all-quite-accustomed-to-all-the-hidden-perks-of-776dd5d6f362
[]
2020-12-04 13:23:19.277000+00:00
['Blockchain Startup', 'Blockchain', 'Blockchain Technology', 'Blockchain Application', 'Blockchain Development']
675
What Makes Social Media, Social Media?
What Makes Social Media, Social Media? In short — People and communication. Photo by ROBIN WORRALL on Unsplash Social media is 90% social, 10% media. These technology platforms allow for infinite power and reach. Our work can reach millions within a fraction of a second after hitting the "Post" button. More people can access our work concurrently than ever before. As much as I see the technological prowess which connects millions at any particular point in time, many see both the good and the dark side of social media. Let me stick to the technological layer for a bit before I touch on the social element. These web and mobile applications have a wider outreach compared to any physical platforms we knew. Television, billboards, and posters have one thing in common. They cannot move. Smart devices can move. Through these physical proxies we carry with us like our wallets, the social media platforms follow us wherever we go. Getting our attention is one vibration away. Visual assaults or on-air commercials could only be envious of their younger but more powerful step-siblings. Now, leaving the technology layer aside, the social element makes up the social media we know today. If we examine all those platforms we are involved in, we can find a common thread. They are designed to facilitate and exaggerate communication. Like a trendy drama, the main cast is ridiculously handsome, rich beyond means, taller than an NBA player, goes to the gym, has eyes like a hawk, and can run like a cheetah. When we engage in social media, we express our love with the magnitude of an earthquake. We comment to show approval, like, share, and funnel in others by tagging them on that same post. That's social media. The dark side of social media is manifested in communication as well. I think an influencer can bury those they dislike. This is nothing new. In a corporate setting, those on top could end the career of a certain someone at the operational level. In that sense, humans are humans. We carry the same behavior wherever we go. What social media actually does is extend the reach of our behavioral and communication patterns beyond our physical boundaries. We can send our love to a child in Siberia instantaneously. We can also continue to destroy that person we hate no matter where they escape to. Zambia or Colombia, it doesn't matter. The throttling continues so long as social media accounts are alive. This is something that we have to adapt to in the 21st century. Social media will continue to evolve, and we have to learn to deal with it. That is the same as dealing with difficult people, annoying friends, and seemingly scheming adversaries. In that sense, social media is as normal as it is novel. This is the social media we know today. Social Media Is A Reflection Of The Social Element, Aldric
https://medium.com/the-innovation/what-makes-social-media-social-media-cc6bb12576e0
['Aldric Chen']
2020-12-05 11:02:26.457000+00:00
['Communication', 'People', 'Personal Development', 'Technology', 'Social Media']
676
Integrating Tech and Learning: How to Empower Your Students When Learning is Remote
Integrating Tech and Learning: How to Empower Your Students When Learning is Remote Remote learning means a lot of potential tech noise that interrupts the flow of a lesson. "I can't find the document!" "How do I zoom in?" and "Where's the 'new tab' button?" are all worries that have come from my kids, and can add up to a lot of time lost on content. So before diving deep into a unit on algebra II or writing dynamic characters, I present a tutorial to walk the group through the hardware and software they'll frequently use. Doing so quells tech woes before they begin, and keeps our lessons moving smoothly. I'm a big fan of the Google platform — It's easy, accessible, and done right, scaffolds considerable autonomy to students. Starting at Gmail, I show my younger students how to check for new mail (from their favorite teacher!) and draft a new email. All Google services are linked at a common hub; the corner of the page is where they can pull up the Calendar, their Drive, or any other tool they might need. In the Calendar, I draw their attention to their weekly class invite from me, and we practice hovering over the event to find the video link — Just one click away from joining the regular class call. Drive, as I've described to my students, is sort of their virtual version of a school desk. I have a folder for each subject area, subdivided by month (or semester, or unit, or…), and here I can find the document labeled with today's date and lesson topic. Once we're in the doc, I like to show my students that each colored icon in the top right corner is actually clickable — tap it, and I can see where in the document all of my classmates are. I can zoom in and out, change my font size, and even pull up old versions of the doc if I want to see previous revisions. In true "I do-we do-you do" fashion, I first share my screen to give a visual of all the software operations. I then ask a student volunteer to share their screen, so that we can repeat the tips and tricks as a group, and finally give students time to practice on their own accounts while I stay on to answer questions and troubleshoot issues that come up. Zoom, SeeSaw, Jupiter, even the touchpad and computer keyboard — They all work uniquely, and students can benefit from learning how to use each one upfront. Remember: A stitch in time saves nine, and a quick tech tutorial before everything else saves a teacher from pulling out all her hair. The added benefit of your crash courses in tech? You've gifted your students new skills that they'll be able to use independently across Minecraft games and history essays for years to come, long after they've graduated from Heads Up. Consider this just another tool in their toolbox towards helping them become resilient and autonomous learners.
https://medium.com/age-of-awareness/integrating-tech-and-learning-fa6509f3208f
['Laura Head']
2020-11-18 03:58:42.326000+00:00
['Tutoring', 'Remote Learning', 'Online Learning', 'Integrating Technology', 'Student Empowerment']
677
The Pioneers of Programmable Money, Introducing 0x
If you give someone $50 and instruct them to only use it to buy healthy food, they can choose to ignore the instruction and lie to you. You won't even know about it. In our day-to-day transactions, we expect people to satisfy our instructions. When they break those instructions, you are likely to face a loss. What if we could strictly enforce instructions in our transactions? Ethereum is a cryptocurrency that provides you with a system to facilitate transactions only when a given set of instructions is satisfied. The system is transparent because it keeps a log of the decision-making process. If you are funding a non-profit, you can provide an instruction to release funds only if the organization reaches the target of sheltering a thousand people. The beauty is that you can have more instructions; for example, you can also instruct it to release half the funds if 500 people are sheltered. Hence, we have the ability to program money that can move at the speed of computers. Currently, money moves according to the speed of the people involved. Let's take startup fundraising as an example. After you convince an angel investor or a venture capitalist, it would take three to four months for the process to complete and then money is transferred. On Ethereum you can raise investments in a few hours. You can instruct the Ethereum computer through a smart contract to issue the investor a token when he transfers money. The investor will be able to redeem the token for a product or service your company provides in the future. Presently there are more than 18,000 tokens on the Ethereum platform which are worth several billion dollars. People will buy your tokens for two reasons. They want to redeem your service on a future date. They believe in your idea and expect its price to increase in the future so they can exchange it for a profit. Your customers are likely to hold tokens of other companies. If they are frequent traders they are likely to keep them in exchanges. Keeping tokens in centralised exchanges is a bad idea. The following is a list of attacks on centralised exchanges. If your tokens are stolen, your company can be at risk. If you have instructed your Ethereum contract to follow the instructions of your majority stakeholders, hackers will dictate your company's terms. Centralised exchanges are closed systems. An open system like Ethereum is easily extendable since anyone is allowed to interface with it to build applications. In a closed exchange, the company can choose to extend only certain functionalities to a certain set of people. Hence, it's hard to build autonomous applications on closed systems. Now, how do we solve the problems of security and closed systems? Decentralised Exchanges In a decentralised exchange, trading happens directly between people's wallets. The decentralised exchange doesn't hold its users' assets. Traders create an order at the price at which they want to buy or sell and pre-authorise a transaction. The order is broadcast throughout the network, and traders who are willing to counter the order fill it. If Sara wants to exchange BAT for ETH, she specifies the price at which she wants to sell it and creates an order in a decentralised exchange. The decentralised exchange pings her wallet to take her approval in order to pre-authorise the transaction. After her approval, the order is visible to other traders who can choose to fill the trade. Mark chooses to fill the trade and authorizes the transaction through his wallet.
The trade happens, Sara's BAT tokens get transferred to Mark's wallet, and his ETH is sent to her wallet. The trade didn't require a middleman to hold tokens. Key Features of Decentralised Exchanges The security risk is minimised because no assets are held. Trading happens faster because the time taken to transfer assets to exchanges is eliminated. Governments cannot regulate or ban them since the Ethereum system approves only decisions given by the majority stakeholders. The cost involved in maintaining a decentralised exchange is drastically lower when compared to managing centralised ones. The trading fee is comparatively lower because the infrastructure costs are optimised and the withdrawal fee is eliminated. Decentralised exchanges are also extendible since they support application integration. It is an open platform and anyone can build relayers/services on top of it. Introducing 0x Open frameworks played an important role in the internet we see today. If you want to build a web app from scratch you have to do a lot of things yourself (listening to requests from port 80, writing a wrapper to interact with your database, etc). But people hardly code from scratch. Instead, they use MVP frameworks like Laravel, CodeIgniter, Django, Flask, etc. You can reuse their code, which was refined over time by the community. This process can save you time and help you take your ideas to market faster. Now back to programmable money! Building autonomous systems that can facilitate transactions on Ethereum from scratch is hard and time-consuming. Your code needs to be optimised because you have to pay for the computational power utilised. 0x is a protocol that takes care of the core architecture. Hence you can focus on the core business logic and worry less about the architecture. 0x smart contracts are rent-free and open source. It is backed by a strong community who continuously refine and optimise its code. You can use their APIs to easily program money or even build a decentralised exchange/relayer. We already have a lot of projects built on top of 0x. Radar Relay, ERC Dex, DDEX, and Paradex (acquired by Coinbase) are the popular relayers built on 0x. They are called relayers because they use 0x's infrastructure to enable trading. These relayers can charge their users a fee for their services. This is a win-win situation for both 0x and relayers because they will individually market and compete to build quality services on top of 0x. When relayers attract more users, 0x will start growing as an open platform. This would also enhance the 0x architecture since it's open source and contributed to by the community. Relayers thus don't have to worry about maintenance and system upgrades; they can focus on the core business model.
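To make the idea of money that follows instructions a little more concrete, here is a purely conceptual sketch in Python. It is not Solidity and not the 0x protocol; it only mimics the charity example above, where half the funds unlock once 500 people are sheltered and the rest at 1,000.

```python
class CharityEscrow:
    """Toy escrow: funds are released only when agreed milestones are met."""

    def __init__(self, total_funds: float):
        self.total_funds = total_funds
        self.released = 0.0

    def report_people_sheltered(self, count: int) -> float:
        # Instruction 1: release half the funds once 500 people are sheltered.
        # Instruction 2: release everything once 1,000 people are sheltered.
        if count >= 1000:
            target = self.total_funds
        elif count >= 500:
            target = self.total_funds / 2
        else:
            target = 0.0
        payout = max(0.0, target - self.released)
        self.released += payout
        return payout

escrow = CharityEscrow(total_funds=100_000)
print(escrow.report_people_sheltered(300))    # 0.0 -- no milestone reached yet
print(escrow.report_people_sheltered(600))    # 50000.0 -- half released
print(escrow.report_people_sheltered(1200))   # 50000.0 -- remainder released
```

On Ethereum, the same rules would live in a smart contract and be enforced by the network rather than by a single program, which is what makes the guarantee meaningful.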
https://medium.com/onchainlabs/the-pioneers-of-programmable-money-introducing-0x-1940d3454510
['Febin John James']
2018-06-22 10:28:07.376000+00:00
['Token', 'Cryptocurrency', 'Blockchain', 'Innovation', 'Technology']
678
Top Medical Device Consulting Companies
Top Medical Device Consulting Companies Medical equipment manufacturing is opening new doors for expansion while addressing several medical professional needs across the world. This year's medical device industry trends reflect the relentless pace of transformation, change, and growth impacting nearly every aspect of the industry. Whether it's the growing specialization in sustainability, the continued growth of products, the increase of ingestible devices, or regulatory changes, medical device makers are gearing up to navigate a dizzying array of opportunities and challenges facing the industry. With an increase in the number of chronic diseases, an aging global population, gastrointestinal disorders, carcinoma diagnoses globally, and the shift toward preventative strategies in healthcare to tackle disease progression, ingestible medical devices are witnessing a continual growth trajectory. At the same time, there is also a surge in VR-based medical equipment right now, resulting in a rise in the industry's investment in reformed versions of VR devices to fuel further growth. According to several other market reports, the revenue inflow for medical devices in 2020 will cross the previous year's numbers. Alongside, the FDA has recently unveiled a faster pathway for the approval of medical devices and has been working to solidify its compliance strategies. At this juncture, a significant number of medical device manufacturing service/consulting companies are entering the landscape with a collection of advanced and insightful offerings. MedTech Outlook has compiled this edition to assist the medical device manufacturing fraternity in strengthening its infrastructure and simultaneously enabling growth. The list comprises prominent consulting companies within the industry that solve medical device manufacturing challenges. Besides, the magazine also includes insights from thought leaders within the sector on industry trends, best practices, recent innovations, and their advice for aspiring CIOs and CXOs. Top Medical Device Consulting Companies A consulting firm that marries knowledge of the medical device market and corporate culture in the US, the latest FDA regulations, and the clients' unique needs to help Japanese medical device companies thrive in the US market globizz.net Hull Associates offers strategic market access and reimbursement consulting to enable pharmaceutical, medical device, and diagnostics organizations to develop targeted strategies for their products such that they are recognized by payment systems and readily adopted by healthcare providers. Hull Associates has served over 350 companies across all disease areas, and among 22 major global markets. Since its inception, Hull Associates has worked on almost every major innovative device produced in the medical industry, such as percutaneous heart valves, robotic surgical systems, and the latest in vitro diagnostics hullassociates.com Assists medical device manufacturing startups with every phase of their product development process. The company adopts a unique, phased approach to structure the product development process in a financially viable manner. It starts with the close integration of Ingenarious' team with that of the clients' to understand their vision and requirements. Thereupon, the company works to identify the most crucial needs in materializing their ideas and vision.
With this phased approach, the company offers startups end-to-end visibility about the scope of product development while reducing the risks in making financial and business decisions. Furthermore, in several instances, Ingenarious has even gone the extra mile supporting startups in applying for grants to raise capital for product development. www.ingenarious.com Irwin & Associates specializes in diverse areas, such as project management, quality system creation, user documentation, quality engineering, verification & validation, risk management, design control, R&D to production transfers, process development, biocompatibility, and microbiology/sterilization validation, and more. The team assists the company in setting up a document control system, a training program, a quality manual, and the appropriate procedures required to get them off the ground to conduct their initial studies and launch their products. For larger organizations, Irwin & Associates provides design control, process development, project management, regulatory, and technical writing to propel their projects forward. www.irwinassociatesinc.com JALEX Medical offers product development, design engineering, regulatory affairs, and quality management solutions along with the expertise of its biomedical engineers to support clients in the medical device industry through every stage of the development process. The company provides step-by-step directives to startups to take their product from ideation to fruition. It works on the process of 3D CAD modeling, finalizing the materials and the manufacturing process, and then prototyping the product for client approval. It further works on the verification and validation process, after which it helps clients acquire clearance on their products to go to market. JALEX Medical works as an extension of the clients' engineering and regulatory department while supporting them throughout the roll-out process www.jalexmedical.com Lachman Consultants provides expert compliance, regulatory affairs, and technical consulting services for the pharmaceutical and allied health industries around the world lachmanconsultants.com Paladin Medical®, Inc. offers expertise based on over 40 years of experience in the medical device industry to help products comply with FDA regulations and enter the U.S. market with the least difficulty. The company specializes in a full range of medical product regulatory, clinical, and technical contract services for FDA and international premarket applications as well as support for regulatory compliance with medical device regulations. Paladin Medical®, Inc. stays on the cutting edge of both biomedical engineering of new medical products and the regulatory science that goes with getting those products to the marketplace. The company typically caters to start-up device manufacturers or established companies with emerging new technologies paladinmedical.com RJR Consulting helps its clients expand their global geographical presence. It does so by providing them regulatory support and assisting them in developing a new market in the targeted country, without the need for them to set up an office locally. www.rjrconsulting.com Strategy Inc. delivers investment and market strategy due diligence for changing the standard of care medical technology.
Founded in 2000, the firm brings together a team of proven medical device and healthcare market research experts who combine in-depth primary research findings with traditional and innovative analytical tools to assess the probability of successful commercialization. A specialty is helping entities with a platform technology prioritize the indication for use with the highest probability of success and the maximum return. Globally focused, 46% of its business is international, providing services to foreign entities seeking to launch their technology in the US. Clients include emerging companies, financiers and enterprise companies with technologies in all stages of the product lifecycle www.strategyinc.net Covance Covance Inc., the drug development business of LabCorp®, is the world's most comprehensive drug development company, dedicated to advancing healthcare and delivering Solutions Made Real®. Our unique perspectives, built from decades of scientific expertise and precision delivery of the largest volume of drug development data in the world, along with our innovative technology solutions, help our clients identify new approaches and anticipate tomorrow's challenges as they evolve. Together with our clients, Covance transforms today's healthcare challenges into tomorrow's solutions. We also offer laboratory testing services to the chemical/agrochemical industries and are a market leader in toxicology services, central laboratory services, discovery services and a top global provider of Phase III clinical trial management services
https://medium.com/@healthcare-tech-outlook/top-medical-device-consulting-companies-6b8a9c718082
['Healthcare Tech Outlook']
2021-07-06 02:09:50.818000+00:00
['Tech', 'Medical Devices', 'Medtech', 'Technology']
679
My Favorite Pieces of Syntax in 8 Different Programming Languages
We love to criticize programming languages. We also love to quote Bjarne Stroustrup: "There are only two kinds of languages: the ones people complain about and the ones nobody uses." So today I decided to flip things around and talk about pieces of syntax that I appreciate in some of the various programming languages I have used. This is by no means an objective compilation and is meant to be a fun quick read. I must also note that I'm far from proficient in most of these languages. Focusing on that many languages would probably be very counter-productive. Nevertheless, I've at least dabbled with all of them. And so, here's my list: List Comprehension def squares(limit): return [num*num for num in range(0, limit)] Python syntax has a lot of gems one could pick from, but list comprehension is just something from heaven. It's fast, it's concise, and it's actually quite readable. Plus it lets you solve Leetcode problems with one-liners. Absolute beauty. Spread Operator let nums1 = [1,2,3] let nums2 = [4,5,6] let nums = [...nums1, ...nums2] Introduced with ES6, the JavaScript spread operator is just so versatile and clean that it had to be on this list. Want to concatenate arrays? Check. let nums = [...nums1, ...nums2] Want to copy/unpack an array? Check. let nums = [...nums1] Want to append multiple items? Check. nums = [...nums, 6, 7, 8, 9, 10] And there are many other uses for it that I won't mention here. In short, it's neat and useful, so that earns my JS syntax prize. Goroutines go doSomething() Goroutines are lightweight threads in Go. And to create one, all you need to do is add go in front of a function call. I feel like concurrency has never been so simple. Here's a quick example for those not familiar with it. The following snippet: fmt.Print("Hello") go func() { doSomethingSlow() fmt.Print("world!") }() fmt.Print(" there ") Prints: Hello there world! By adding go in front of the call to the closure (anonymous function) we make sure that it is non-blocking. Very cool stuff indeed! Case & Underscore Indifference proc my_func(s: string) = echo s myFunc("hello") Nim is, according to their website, a statically typed compiled systems programming language. And, according to me, you have probably never heard of it. If you haven't heard of Nim, I encourage you to check it out, because it's actually a really cool language. In fact, some people even claim it could work well as a Python substitute. Either way, while the example above doesn't show it too much, Nim's syntax is often very similar to Python's. As such, this example is not actually what I think is necessarily the best piece of syntax in Nim, since I would probably pick something inherited from Python, but rather something that I find quite interesting. I have very little experience with Nim, but one of the first things I learned is that it is case and underscore-insensitive (except for the first character). Thus, HelloWorld and helloWorld are different, but helloWorld, helloworld, and hello_world are all the same. At first I thought this could be problematic, but the docs explain that this is helpful when using libraries that made use of a different style to yours, for example. Since your own code should be consistent with itself, you most likely wouldn't use camelCase and snake_case together anyway. However, this could be useful if, for instance, you want to port a library and keep the same names for the methods while being able to make use of your own style to call them.
In-line Assembly function getTokenAddress() external view returns(address) { address token; assembly { token := sload(0xffffffffffffffffffffffffffffffffffffffff) } return token } Solidity is the main language for writing smart contracts on the Ethereum blockchain. A big part of writing smart contracts is optimizing the code, since every operation on the Ethereum blockchain has an associated cost. As such, I find the ability to add in-line Solidity assembly right there with your code extremely powerful, as it lets you get a little closer to the Ethereum Virtual Machine for optimizations where necessary. I also think it fits in very nicely within the assembly block. And, last but not least, it makes proxies possible, which is just awesome. For-Each Loop for (int num : nums) { doSomething(num); } In a language generally considered verbose, the for-each loop in Java is a breath of fresh air. I think it looks pretty clean and is quite readable (although not quite Python num in nums readable). Macros #define MAX_VALUE 10 I got introduced to C-style macros when building my first Arduino project and for a while had no clue exactly what they did. Nowadays, I have a better idea of what macros are and am quite happy with the way they are declared in C. Not hating on C by any means, but, like Java, there’s little about the actual syntax that stands out, so these last two ones are a little meh, unfortunately. ‘using namespace’ using namespace std; Sorry :( And that’s it! So, what are your favorite pieces of syntax?
https://medium.com/swlh/my-favorite-pieces-of-syntax-in-8-different-programming-languages-ba37b64fc232
['Yakko Majuri']
2020-09-26 22:28:17.954000+00:00
['Programming', 'Software Development', 'Technology', 'Software Engineering', 'JavaScript']
680
The Top 10 File Handling Techniques in Python
8. Read Files One of the most important file operations we do is certainly reading the data from the files. After all, the content within the file is probably the only reason that the file was created in the first place. To read a file, the most conventional way is to create a file object using the built-in open() function. By default, the function will open the file in the read mode and treat the data in the file as text. Let's see an example (sketched below). It shows you the most common ways to read the content of a file. If you know that your file doesn't have too much data, you can simply read it all at once with the read() method. However, when the file is enormously large, you should consider using a generator that allows you to process the data row by row. Specifically, the file object works as a generator by rendering the data row by row, which saves memory by eliminating the need to load all the data as we do with the read() method. If you want to learn more about generators, you can refer to this article, which covers the use of generators in file reading.
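A minimal sketch of such an example (the original embedded snippet is not reproduced here, so the file name data.txt is just a placeholder):

```python
# Create a small sample file so the snippet is self-contained.
with open("data.txt", "w") as f:
    f.write("first line\nsecond line\nthird line\n")

# Option 1: read everything at once with read() -- fine for small files.
with open("data.txt") as f:          # default mode is "r" (read, text)
    content = f.read()
    print(content)

# Option 2: iterate row by row -- the file object yields one line at a time,
# so the whole file never has to sit in memory.
with open("data.txt") as f:
    for line in f:
        print(line.rstrip())         # per-row processing goes here
```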
https://medium.com/better-programming/the-top-10-file-handling-techniques-in-python-cf2330a16e7
['Yong Cui']
2020-08-05 19:16:04.640000+00:00
['Artificial Intelligence', 'Programming', 'Python', 'Technology', 'Data Science']
681
MiiX•ASCH Hackathon 2019
MiiX, together with 2050 Conference, ASCH Platform, CSDN, etc., invites you to participate in the hackathon in Hangzhou, China! Event: 2050 Conference: MiiX• ASCH Hackathon Organizers: 2050 Conference, MiiX, ASCH Platform Location: Yunqi Town, Hangzhou, China Time: April 26–28, 2019 Competition Theme: Get together to create remarkable dApps and explore the future! What is 2050 Conference? 2050 Conference was initiated by Hangzhou Yunqi Science and Technology Innovation Foundation. It is a conference aimed at young people. It will allow young people to stand in the spotlight, talk about innovation, look towards the future and face the challenges with a young mind. From May 25 to 27, 2018, the first 2050 conference was held in Yunqi, Hangzhou, gathering tens of thousands of young people around the world who embrace science and technology. Dr. Wang Jian, Founder of Alibaba Cloud & Yunqi Science and Technology Innovation Foundation, said that 2050 is a reunion rather than a conference. 2050 allows the young generation to show their responsibility to the world and their promise for the future.
https://medium.com/aschplatform/miix-asch-hackathon-2019-21a0f7e81a2
[]
2019-04-09 14:03:19.135000+00:00
['Blockchain', 'Blockchain Technology', 'Aschnews']
682
How to Initiate Innovative Business Collaboration
How to Initiate Innovative Business Collaboration Creating synergy is one of the main goals of collaboration in transforming business environments. Collaboration is essential to create synergy in diverse teams, especially for transformation initiatives. Many businesses in this economic climate perform substantial transformative activities to stay competitive in the market. However, collaboration sounds like a stale term carrying connotations. The term 'collaboration' is overused and has lost its significance, especially with the emergence of internet technologies and widespread digital transformation initiatives. For example, some consider social media a collaborative or collaboration tool. In reality, these so-called social media collaboration tools have little to do with actual business collaboration. There is a tendency to consider social media tools as practical, useful, and highly valuable for business collaboration purposes. However, when we carefully examine these tools, we can see that they are information-sharing tools rather than actual business collaboration tools. Here is my story on how to spark innovative business collaboration in the workplace. You can find more of these stories on my News Break profile.
https://medium.com/illumination/how-to-initiate-innovative-business-collaboration-448c025d01af
['Dr Mehmet Yildiz']
2020-12-28 16:21:39.071000+00:00
['Business', 'Entrepreneurship', 'Startup', 'Technology', 'Writing']
683
Top 10 Retail Technology Magazines
Top Retail Technology Magazines There are various retail technology magazines across the globe providing technology news, articles, and insights on the latest trends in the retail industry. Below are the best magazines and publications in the retail sector. Top Retail Technology Magazines Retail Dive Retail Dive provides in-depth journalism and insight into the most impactful news and trends shaping retail. The newsletters and website cover topics such as brick-and-mortar, retail technology, e-commerce, marketing, payment technology, store operations, omnichannel, and more. Retail Dive is a leading publication operated by Industry Dive. Our business journalists spark ideas and shape agendas for 10+ million decision makers in competitive industries. Check Out — Retail Dive 2. Retail CIO Outlook Retail CIO Outlook Retailers are increasingly recognizing that what has made them successful historically — getting the right product in the right place for the right price — is now the bare minimum. The new battleground is improving people's experience with digital technology and offering the ease of shopping sitting at home. Retail CIO Outlook is a print medium that talks about various enterprise solutions which can redefine business goals of tomorrow. Following a unique learn-from-peer approach, the magazine helps decision makers stay abreast of the ever-changing business scenario, where one needs to know about the latest developments from the best sources. Retail CIO Outlook brings you the latest trends in the industry on the most happening technologies and solution providers. The next wave of intelligent automation solutions — powered by artificial intelligence — will weave technology, big data and people together to transform what retailers do and how they do it. Retail CIO Outlook is like a beacon guiding organizations towards the correct path in this technologically enhanced era. Retail CIO Outlook aims to be that platform which allows high-level executives across industries to share their insights, which in turn will help the technology and business leaders and the start-up ecosystem with analysis of information technology trends and provide a better understanding of the role that retail plays in achieving business goals. Check Out — Retail CIO Outlook 3. Retail Technology Review Retail Technology Review Retail Technology Review is a feature-rich website dedicated to the product and solution needs of end users within the retail sector. The site's content includes everything from Mobile Computing, RFID, Printing & Labelling, EPoS systems, Kiosk technology and Surveillance & Security through to current news, informed opinion articles and key events on the retail calendar. Check Out — Retail Technology Review 4. Retail Tech Insights Retail Tech Insights Worldwide retail sales in 2020 are projected to be about 26 trillion dollars, out of which about 4 trillion will be in U.S. sales. At least in the U.S., retail is under constant assault by Amazon, showing the power of technology to the retail industry. Retailers used to live and die by merchandising picks, location, and prices; now we can add technology as an essential part of the mix too. The retail industry is changing from traditional retail to omnichannel retail, which involves substantial investments in technology by the retail sector.
Retail Tech Insights, through its print and digital magazines, websites and newsletters, is the trusted source for new trends in technology for retail, new solutions available for retail, challenges being faced by retail executives in adopting technology solutions, and bringing out the best of the technology vendors providing solutions and services to retail. They offer unbiased curated content from peers of retail executives, acting as a go-to knowledge platform for technology adoption and implementation in the retail industry. Retail Tech Insights magazine content includes everything from Artificial Intelligence, Analytics, Cloud, Merchandising, POS, Kiosk technology, Surveillance & Security, CRM, RFID, Supply Chain, Robots, and Drones to current news, informed opinion articles by industry insiders and essential events on the retail calendar. We are always grateful to hear the real-life experiences, challenges, and advice of innovative practitioners and leaders in the retail industry in using technology in their organizations, and we bring that to our readership. Check Out — Retail Tech Insights 5. Retail IT Insights Retail IT Insights RetailITInsights covers point of sale (POS) software and hardware, omnichannel retailing, in-store systems & operations, and loss prevention. RetailITInsights is the premier business strategy resource for innovative yet pragmatic technology solutions in the retail industry. Our goal is to help retail executives make informed decisions about technology and operations solutions for every sales channel. Check Out — Retail IT Insights 6. Retail Technology Retail Technology Retail technology — the IT publication for major retailers and brands for 30 years. Check Out — Retail Technology 7. iXtenso iXtenso iXtenso.com is an online magazine that focuses on the retail sector. The variety of iXtenso topics is as multifaceted as "the market" itself. They provide daily information on trends and opinions. Check Out — iXtenso 8. Retail Info Syst News RIS is the leading source for business intelligence and technology insight for retail executives adapting to market forces that are disruptive, transformational and engines for innovation. Check Out — RIS 9. Online Retail Today Online Retail Today Online Retail Today brings together the best content for online retail professionals from the widest variety of industry thought leaders. It is a combined effort of eTail and Aggregage. The goals of the site and newsletter are to: Collect High Quality Content — The goal of a content community is to provide a high quality destination that highlights the most recent and best content from top quality sources as defined by the community. Provide an Easy to Navigate Site and Newsletter — Our subscribers are often professionals who are not regular readers of the blogs and other sources. They come to the content community to find information on particular topics of interest to them. This links them across to the sources themselves. Be a Jump Off Point — To be clear, all our sites/newsletters are only jump off points to the sources of the content. Help Surface Content that Might Not be Found — It's often hard to find and understand blog content that's spread across sites. Most of our audience are not regular subscribers to these blogs and other content sources. Check Out — Online Retail Today 10. Retail Tech Innovation Hub Retail Tech Innovation Hub Retail Technology Innovation Hub is a leading newswire and information hub for the global retail technology community.
An independent source of expert news, views, analysis and research, Retail Technology Innovation Hub goes beyond the press releases and marketing hype. We regularly talk to retailers, tech suppliers, trade associations, analysts and consultants and tackle all the omnichannel retail topics that matter. Check Out — Retail Tech Innovation Hub
https://medium.com/@jackmathew/top-10-retail-technology-magazines-dafe94ffe300
['Jack Mathew']
2020-12-15 10:46:06.534000+00:00
['Magazine', 'Retail', 'Retail Technology', 'Technews', 'Technology']
684
Social Blockchain Rescue Rangers
On March 21, the scandal about Facebook data broke. Users (surprise, surprise!) found out that funny quiz apps used and analyzed their data. I am not sure it was the same people; I rather believe that those who like quizzes didn't pay attention, but those who like scandals did. What is the solution? Does blockchain offer a solution to this well-known problem of centrally stored social data? In this article I review just a few blockchain projects which addressed this issue. Privacy — our data doesn't belong to us My opinion — if I publish something, it doesn't really belong to me anymore. Is it bad if someone uses this data to show me advertisements? I don't think so. I don't like too many ads, but it is not a question of targeting — they can spam me with something not related to my personal characteristics too. So I don't consider this a problem. But some people do, and the solution from the most relevant blockchain projects is to let the advertisers pay you directly for your data (and continue to show their ads): With Sphere's decentralized social network, all of your personal browsing and search engine results are privately secured and stored away from the prying eyes of advertisers. And if they want a peek, they have to pay you! So this kind of project wants businesses to pay users so they watch their ads. To make it more absurd: why shouldn't users get paid per action? It is almost there: With the new Ovato Coin, users can get rewarded from your everyday purchases and be a part of daily deals from both national and local merchants. Merchants can now save time and market directly to Users. No direct reward to content creators We come to a social network to find some good content. The system looks like this: users come for content, organizations come for users, content-creators come for… what? So there are projects which offer to let influencers be rewarded by other users. Sapien: Creators and Curators will receive payouts for content that has created quantifiable value. Users will be able to tip one another for posts and comments Also, many projects want brands to reward users for content and interactions. This solution is less beneficial for users: influencers are getting paid now and it is not a problem for them; average users never get considerable money from their social activity anyway but will annoy others. Businesses can't reach their audience Forbes thinks the main problem is that some organisations can influence our opinion, but if you have ever tried to reach your audience you know it's not easy. To make your page post visible on Facebook even to subscribers, you have to pay. The problem is when users see only paid advertisements instead of valuable page content. The ability to analyze a target audience is not a problem either (Google Analytics and other platforms can do the same). The problem is reaching the right people as cost-effectively as possible, even for small businesses. Here is a complex idea for targeting ads without the middleman: Instead of housing user data in a centralized data store, MAD Network allows users to keep their data completely private by pushing ad decisions to the edges — to the users' devices. To target desired users, advertisers create machine learning models that they deploy to the MAD Network. Then a personal Artificial Intelligence "AI" Agent on a user's device runs those models using the consumer's personally identifiable information (PII), and cryptographically confirms or denies if the user is in the desired cohort.
If they are, the agent then pulls in the relevant ads without ever revealing anything about that user. My feed is managed by someone else Someone chooses what I see, and I don't know the rules. People come to social networks to communicate through their personal feed with people, communities, and brands. However, Facebook decides for them. The feed has become uninteresting and obsolete. I personally have nothing against someone using my data, but I am against a boring feed, even with all the analytical tools these platforms have. As a user, I want a controllable, interesting, and useful feed which makes me better. I didn't find any project which addresses this need, though blockchains have the potential to solve it. They offer mechanisms for decentralized management, while at the same time they can keep data integrity, privacy, and availability.
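As a rough conceptual sketch of that on-device matching idea (this is a Python toy, not MAD Network's actual system or API; all names here are made up): the advertiser ships a targeting rule, the user's agent evaluates it locally against data that never leaves the device, and only a yes/no answer comes back.

```python
# Advertiser side: a targeting rule, expressed as plain data (no user info needed).
targeting_rule = {"min_age": 25, "interests": {"cycling", "travel"}}

# User side: personal data stays on the device.
local_profile = {"age": 31, "interests": {"travel", "cooking"}}

def in_cohort(rule: dict, profile: dict) -> bool:
    """Evaluate the rule locally; only the boolean result is ever shared."""
    if profile["age"] < rule["min_age"]:
        return False
    return bool(rule["interests"] & profile["interests"])

# Only this single bit would leave the device -- never the profile itself.
print(in_cohort(targeting_rule, local_profile))   # True
```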
https://medium.com/coinmonks/social-blockchain-rescue-rangers-30d9a02e452c
['Anna Chalova']
2020-10-16 11:19:11.770000+00:00
['Technology', 'ICO', 'Advertising', 'Bitcoin', 'Blockchain']
685
A Complete Guide about Blockchain Implementation
If you have not started to experiment with the blockchain, it simply means you are behind the technology curve. Our comprehensive, step-by-step guide on blockchain implementation will help you understand the best practices to apply blockchain in your business use case. Size of the blockchain technology market worldwide from 2016 to 2021 (in billion U.S. dollars) Image Source: Statista Many organizations are experimenting with blockchain technology due to its potential to bring trust and transparency within any business ecosystem. PwC surveyed 600 global executives and found that 84% of organizations are getting engaged with the blockchain, while only 15% of them have a blockchain project in progress. Steve Davies, a Global Blockchain Leader at PwC and author of "Blockchain is Here. What's your Next Move?", said that blockchain projects have been stalling for various reasons. The barriers to blockchain implementation include uncertainty about how blockchain will be regulated legally and how it will bring trust among a large number of users. However, Davies said that you cannot sit back and wait; you should be experimenting with it, as its effect is going to be intense. Gartner also forecasted that blockchain is going to generate over $3 trillion in business value by 2030. It means that one-fifth of global economic infrastructure will run on blockchain-based solutions by that time. You need the right strategy that allows businesses to implement blockchain projects without coming to a halt. We shall explain the step-by-step blockchain implementation strategy to overcome any uncertainties in blockchain projects. Here's a Step-by-Step Guide to a Blockchain Implementation Strategy Understand what a Blockchain is Blockchain technology allows records of information to be stored in blocks which are linked to each other via a cryptographic mechanism to build a distributed ledger. Anyone who has access can share or verify the ledger without requiring expensive third-party verification. Each block has a cryptographic signature that is linked to the previous block in a way that makes the blockchain tamper-proof once all the blocks are created. A blockchain can be permissioned, placing restrictions on what stakeholders can view and update, or it can be permissionless, allowing anyone to view or update anything on the network. We have explained the blockchain in brief above because we recommend you first understand what a blockchain is before you plan to implement it in your business. Determine if your business use case requires blockchain or not When you decide to develop a blockchain solution, figure out what problems it can solve and whether it is the right way to solve them or not. To analyze if you require a blockchain solution or not, you need to understand the entire process inside out. You need to find the bottlenecks in the current business process and, once identified, try to examine how blockchain can resolve them. Consider the following questions before you think about blockchain implementation across your business operations: Does your business case require data updates from multiple parties? Do intermediaries add complexity to your business case? Do multiple parties exchange data? Do transactions interact? Does your use case require verification? Are business interactions time-sensitive? If you answered "Yes" to four out of six questions, it indicates that blockchain could be a great solution to enhance your business efficiency.
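To make the hash-linking described under "Understand what a Blockchain is" concrete, here is a minimal, illustrative Python sketch (a toy, not a production design): each block stores the hash of the previous block, so changing any earlier record invalidates every block after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash a canonical JSON encoding of the block's contents.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def add_block(chain: list, records: list) -> None:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "records": records, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    # Re-derive every hash and check each link back to the previous block.
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
add_block(chain, ["Alice pays Bob 5"])
add_block(chain, ["Bob pays Carol 2"])
print(is_valid(chain))                          # True
chain[0]["records"] = ["Alice pays Bob 500"]    # tamper with an old record
print(is_valid(chain))                          # False -- the links no longer match
```

A real blockchain adds consensus, signatures, and distribution across many nodes; the sketch only shows why tampering with one block is detectable from every later block.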
Select the right blockchain platform Since a variety of blockchain solutions exist, you have to ensure you choose the right blockchain type for your use case. Here is a comparison of the best blockchain platforms used by companies for building blockchain-based applications
https://medium.com/hackernoon/a-complete-guide-about-blockchain-implementation-6b0ccf9d4dd
[]
2019-07-13 10:11:20.697000+00:00
['Blockchain Technology', 'Blockchain Development', 'Blockchain Startup', 'Blockchain', 'Blockchain Application']
686
7 Fantastic Exterior Upgrades for Model 3
There are a ton of Teslas on the coasts of the US. And because they all look the same, Tesla owners love customizing their cars. Here are some aesthetic and functional accessories to make your Model 3 your own. Efficiency EV owners are obsessed with efficiency. A spoiler reduces drag by "smoothing" the air behind the car. Less drag means higher efficiency. This is one of the reasons why the Model 3 Performance has a carbon fiber spoiler on top of the trunk! For other trims, a spoiler can be easily added. When I added my spoiler to my car, I saw a 3.96% increase in efficiency. In other words, installing my spoiler added 10 miles of range to my car. My experimental result is in line with Unplugged Performance's independent aerodynamic study's results. Spoiler on/off testing results by the author. 90+% of the driving was between 40 and 60 mph, in relatively consistent California weather. Climate control was never turned on. The carbon fiber spoiler from TOPlight¹ looks really nice and is extremely simple to install. The tool-less installation takes less than 10 minutes. Carbon fiber spoiler¹ installed on the car. Photo by TOPlight. Chrome Delete I personally like the look of a "chrome delete" on modern cars. This satin black chrome delete kit¹ looks great. It covers all of the chrome trim around the windows, the side mirrors, side repeater cameras, door handles, and Tesla T logos. I love this kit because it comes with everything that you need to do the install. They have amazing step-by-step video instructions, and just in case you make a mistake, they give you TWO FULL SETS of trim pieces. (Read as: if you have a friend with a Model 3, you can split the cost of the chrome delete kit). Chrome delete¹ on a Model 3. Photos by Nikola Pro. Mud Flaps Tesla's paint is relatively soft compared to other automakers' paint, so it seems to scratch more easily. Because the sides of the car wrap "underneath" the car, a lot of dirt and mud gets kicked up onto the doors by the tires. These mud flaps¹ help a lot with this. They also won't scrape on the ground like the mud flaps directly from Tesla. I like that they're discreet and don't impact the visuals of the car. Mud flaps¹ installed on the car. Photo by Tepeng. Making Model 3 Quieter EV drivers hate noise. Wind noise is air howling at you. And tire/road noise is an equally annoying hum. Check out my other article, where I deep dive into how noise is generated and how to mitigate that sound. Summary/TL;DR Carbon fiber spoiler¹ that increases efficiency & range and also looks really great. Satin black chrome delete kit¹ to remove the chrome around the windows. Mud flaps¹ to keep your car cleaner and to help protect the fragile paint. Reducing Noise
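As a quick back-of-the-envelope check of how the 3.96% efficiency gain quoted above lines up with roughly 10 extra miles (the 250-mile baseline below is an assumption for illustration; the real figure depends on trim and conditions):

```python
baseline_range_miles = 250          # assumed baseline range, varies by trim
efficiency_gain = 0.0396            # 3.96% improvement measured with the spoiler

extra_range = baseline_range_miles * efficiency_gain
print(round(extra_range, 1))        # ~9.9 miles, consistent with the ~10 miles quoted
```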
https://medium.com/@matthewjcheung/7-fantastic-exterior-upgrades-for-model-3-413127bceb9d
['Matthew Cheung']
2021-03-28 19:49:28.345000+00:00
['Cars', 'Automotive', 'Autonomous Cars', 'Technology', 'Tesla']
687
Solutions: Smart Parking (Part 4)
In the next series of articles, BigBang Core will look into a number of different industries that are benefitting from blockchain technology. As always, make sure to follow us on Twitter and Instagram and give us a like on Facebook. You can also join our Telegram Channel for all the latest news and updates. Photo by Patryk Sikora on Unsplash The parking industry plays a vital role in urban areas. A city needs to have sufficient parking lots to provide space for its residents or visitors to park their cars. A city must fulfill the requirements of drivers; vehicles are a prime component of transportation, and sufficient parking spaces ensure less traffic congestion. With the increase in the number of vehicles on the road, searching for a parking spot is becoming more and more difficult. The limited number of parking spaces and the difficulty of finding a free spot demand a lot of time and effort, but the needs of the parking industry can be met easily through the utilization of blockchain technology. Issues with the Parking Industry In the past few years, finding parking spaces has become a tough task for people. Nowadays, countries are facing severe parking challenges. When parking spaces are scarce, drivers are left with the option of parking on streets or in no-parking zones. Such acts can result in various issues, i.e., accidents, road rage, traffic congestion, fines, etc. In traditional parking mechanisms, the staff have to determine the identity of every vehicle owner manually to prevent any unauthorized vehicle from entering. This process is resource-hungry and consumes a lot of time. Furthermore, recruitment of staff can result in higher costs. Low Parking Efficiency Conventional parking mechanisms demand a workforce on duty, which results in lower parking efficiency, arbitrary collection of charges, etc. Parking Data Sovereignty Problems Parking data should remain private to the user in question. If it is somehow stolen by criminals, it can pose a severe threat to municipal security. Information Island There are various types of commercial/public/community parking lots and other systems, resulting in information islands, and their utilization rate is low. Blockchain in the Parking Industry Smart parking can be revolutionized completely with the help of blockchain. A decentralized parking platform can be formed with the use of blockchain to enable vehicle owners to search for and reserve free parking spots. It will enable drivers to pay through applications and even use cryptocurrencies via smart contracts. Furthermore, it will aid drivers in navigating to their parking spots and give live updates regarding traffic. Some other benefits include: Transparent reservation pricing Parking spots' digital renting Data-driven fines Transparent parking service Decentralized smart parking will streamline parking, reducing time and effort for drivers. It will let parking spot providers monetize key areas, and will ultimately reduce traffic issues. It will also allow drivers to find alternate spots in case there is high traffic. Eventually, this will lead to fewer traffic jams and a cleaner environment. BigBang Core: Revolutionizing the Parking Industry To ensure information security while realizing the characteristics of sharing, BigBang Core employs blockchain technology. It provides solutions for the flaws in the parking field, including barrier gate systems, cloud payment systems, license plate recognition, access control systems, and other integrated solutions.
It is committed to building a comprehensive urban parking management system to achieve the effective operation of parking resources. The following is a summary of what BigBang Core is doing to facilitate the parking industry: Guarantees Data Sovereignty BigBang Core uses the consensus and immutability of the blockchain itself to reconcile differences in the data held by all parties, reduce the risk of data being copied arbitrarily, and protect the legal rights of data owners. Ensures Data Security and Reliability To desensitize information and ensure user privacy and management across different smart parking lots, BigBang Core employs the blockchain's anonymity mechanisms and encryption algorithms. Data Analysis for Parking Planning BigBang Core carries out real-time trend monitoring along with fast and accurate data analysis to produce visual index analysis and targeted mining. This analysis of parking-space demand provides a basis for future parking plans. Automated Management The system automatically calculates parking time and supports online billing via an application, eliminating the risk of cash loss (a small sketch of this kind of automated billing follows at the end of this article). Breaks Information Islands Using a distributed ledger and smart contracts, parking lots can share data over the network, building a smart parking Internet of Things platform. Zhejiang Zhengtu Technology Co. Ltd: BigBang Core provides smart parking solutions for Zhejiang Zhengtu Technology Co. Ltd. Situated in Hangzhou, Zhejiang Zhengtu Technology Co. Ltd. is a leading parking solution provider focused on delivering parking services and management solutions to customers. BigBang Core provides top-notch integrated solutions for the company with the aid of big data processing and blockchain technology. The Future of Parking? BigBang Core ensures information security while enabling data sharing through blockchain technology. It provides smart parking solutions to companies, increasing revenue. There is a plethora of smart parking systems, yet they often fail to fulfill the needs of their customers. However, BigBang Core is working hard to overcome these issues and provide optimal smart parking solutions.
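To make the "Automated Management" idea above concrete, here is a minimal, hypothetical Python sketch of how a parking session could be billed automatically and written as a hash-linked, tamper-evident record. The rate, field names, and hashing scheme are assumptions for illustration only; BigBang Core has not published its implementation.

```python
# Illustrative only: the rate, record fields, and hashing scheme are assumptions,
# not BigBang Core's actual implementation.
import hashlib
import json
from datetime import datetime, timedelta

RATE_PER_HOUR = 2.0  # hypothetical hourly parking rate

def billing_record(plate: str, entry_time: datetime, exit_time: datetime,
                   prev_hash: str) -> dict:
    """Compute the parking fee and produce a hash-linked (tamper-evident) record."""
    hours = (exit_time - entry_time).total_seconds() / 3600
    body = {
        "plate": plate,
        "entry": entry_time.isoformat(),
        "exit": exit_time.isoformat(),
        "fee": round(hours * RATE_PER_HOUR, 2),
        "prev_hash": prev_hash,  # links this record to the previous one, like a mini ledger
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

entry = datetime(2020, 12, 11, 9, 0)
record = billing_record("ZJ-12345", entry, entry + timedelta(hours=3), prev_hash="0" * 64)
print(record["fee"], record["hash"][:12])  # 6.0 plus a short hash prefix
```

In a real deployment the record would be anchored on the chain rather than kept in memory, but the flow (measure time, compute the fee, commit an immutable record) is the same idea the article describes.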
https://medium.com/@bigbang-core/solutions-smart-parking-part-4-25c548043d8d
['Bigbang Core']
2020-12-11 06:25:46.719000+00:00
['IoT', 'Industry', 'Blockchain', 'Blockchain Technology', 'Business']
688
How I switched careers to become a software engineer in 11 months (and how you can too)
Photo by NESA by Makers on Unsplash Before I decided to move into software engineering, I was a marketer in the tech world. I tried quite a few types of marketing - events, public relations, search engine optimization, content creation, digital advertising, email marketing - but never found a perfect fit. My last company was a personal finance startup with solid brand recognition. Their motto was content is king. Unlike most tech companies, there were a ton of editors and journalists and only a handful of software engineers. A year after I started, the company decided to shake up its strategy. Content was no longer enough. A plethora of new personal finance startups launched mobile apps that year, promising to help consumers track their finances, learn to budget, eliminate student loan debt, and consolidate credit card payments. Not wanting to be left behind, my company began thinning the editorial side of the business while rapidly hiring product folks, engineers, and designers. An inner feeling made me realize then that it was time to switch gears. ⚙️ In this article, I'll go through how I switched careers to become a software engineer from start to finish. So let's get started. Step 1: Research immersive programs I began to research immersive classes in software engineering. I liked that App Academy and Hack Reactor both offered free in-person intro classes to help prospective students prepare for their entrance exams. I also heard positive things about Hackbright and have since met a number of talented women who attended their program. Ultimately, Hack Reactor won me over because it offered a rigorous one-month Structured Study Program (SSP) course. The program was designed to transform participants from beginners to Hack Reactor Immersive ready. The curriculum seemed practical. It helped that I knew three acquaintances who had successfully landed software engineering roles after completing the program. Step 2: Coding immersion Once I narrowed my focus to Hack Reactor, I needed to prepare for SSP and the entrance exam. To do so, I completed the Udacity Intro to JavaScript course along with a few other online courses in JavaScript. Between SSP and Hack Reactor's immersive program, I spent four months coding up to 6 days a week, 12+ hours a day. I sharpened my problem-solving skills, improved my understanding of JavaScript, learned front-end and back-end frameworks, and practiced working alongside other engineers. Step 3: Study for the job search with online courses As intense as my experience was at Hack Reactor, it was only the beginning. I had a growing list of concepts that I struggled with during the program. At the top of that list were algorithms and data structures. Software immersive programs are great for teaching you the skills you'll need on the job as an engineer. Training for job interviews is a bit of a different beast, and mastery of algorithms and data structures is often the key to being offered an on-site. I tried applying to companies that forgo traditional white-boarding, but they're few and far between. Cracking the Coding Interview is seen as the preeminent resource for learning algorithms. However, it wasn't the resource I found most useful, personally.
Instead, these are the resources I used to prepare for technical interviews and onsites: CodePath - an 8-week course covering all the most frequently asked interview questions from data structures to system design InterviewCake - a guide that explains the most common patterns found in algorithmic thinking LeetCode - endless practice problems Grokking the System Design Interview - explanations of the tradeoffs involved in common system design questions, such as how to design Instagram Never stop learning Step 4: Take advice from experienced engineers I asked a ton of senior engineers within my network for advice on the job search. Everyone was gracious with their time and excited to see new types of talent entering the industry. Here were some of the most helpful pieces of advice: Get a foot in the door: Every engineer has to start somewhere. Many engineers landed at brand name companies after working at tiny no-names. Don't worry if you don't find a perfect fit right away. Rewrite your résumé: If you're a new engineer, your résumé is likely written in a way that makes you look really junior. Focus on the tradeoffs and technical decisions you made, not what you implemented. Look for mentorship opportunities: Aim for a team with 30+ engineers, because this will teach you best coding practices and provide mentorship opportunities. Otherwise, know who your manager will be and make sure they're able to help you make technical decisions (young engineering managers are often thrown into the role with limited people or leadership experience). Work on personal projects: This will demonstrate your enthusiasm for engineering during the job search and give you something unique to talk about in interviews. Step 5: Ignore unhelpful advice from recruiters and others My job search took place in summer 2018. I learned to tune out a lot of well-intentioned but unhelpful suggestions. These came from recruiters, fellow engineers, and concerned friends. Here are some of them: The job market has slowed down for entry-level engineers over the past few years. Mid-sized companies are only hiring for senior positions and have put a hiring freeze on junior candidates. Not only is the market oversaturated, but the quality of bootcamp graduates has gone down in past years. It will be tough to find a job. You're a strong candidate but our company doesn't have the resources to mentor you. Please stay in touch; we'd love to interview you again when you have more experience. Good luck getting hired during the summer. You're competing with all the computer science students who have summer internships. Try again in the fall when more positions open up. Good luck getting hired during the fall.
Hiring will slow down as companies approach Q4. If you don’t find a job this summer, you’re going to have to wait until next year. Try becoming a product manager or finding an internship. Maybe you can pivot into software engineering when you’re ready. I’m certain many aspiring engineers hear similar types of feedback. The key is learning to tune it out and stay focused, otherwise it’s easy to burn out. Step 6: Create a study plan After Hack Reactor, I spent a lot of time reviewing technical concepts in preparation for tech screens and interviews. Here’s my rough study plan: Study algorithms & data structures. Study system design. Do a hackathon (it won’t teach you engineering best practices, but it’s a fun group experience). Build a personal portfolio (or another project you can talk about). Write down every interview question from every phone screen & onsite. Review the answers you don’t know. Practice with others. Algorithms are more fun when you’re working on them in a small group. (Pramp and CodePath were two ways I found practice partners). Step 7: Build an online presence Make it easy for recruiters to find you. Build robust profiles with screenshots of projects and links to GitHub on the following sites. Feel free to click the links to check out my examples (or connect with me): It’s important to show prospective employers the quality of your work. Photos, videos, links to live projects, well documented READMEs, and clean coding practices make it easier for recruiters to take a chance on you. Step 8: Remember, it’s a numbers game I heard the refrain, “It’s just a numbers game,” frequently from engineers, career coaches, and mentors. Ultimately, here were my numbers: My applications were mostly front-door, with some referrals, some recruiters who contacted me, and some outreach from Hired or AngelList. Knowing the numbers helps you take an analytical approach. For example: 26% of my total applications (cold, warm, referrals) converted to initial phone screens. 51% of my phone screens converted to a tech screen or assignment 28% of my tech screens and assignments converted to an onsite From this, I learned that I was fairly consistent at getting my résumé to pique a recruiter’s interest, successful (with room for improvement) at initial phone conversations, and somewhat weak in demonstrating my technical skills. Analyzing the numbers allowed me to step back from churning out more applications. Instead, I spent extra time brushing up on technical weaknesses, with a goal of improving my conversion rate from tech screen to onsite. Step 9: Master the onsite Once you’re lucky enough to land an onsite or two, there’s still a lot to master. Personally, I found most onsites extremely draining. They lasted anywhere from 2 - 6 hours and ranged widely in topics covered. Some companies forgot to give me any breaks. Because I was being tested on technical knowledge, there was very little small-talk and I was often grilled with questions for hours at a time. One company told me they weren’t moving forward with me because I had struggled on the very last question after hours of successful algorithms. I’m still not sure what they learned from that very last question that they couldn’t have gleaned from the tech screen or the previous few hours, but that feedback stung. 
Topics covered during my onsites included: Algorithms System design Build an app using the company’s API Depth of knowledge questions about my coding language (JavaScript) Depth of knowledge questions about HTML/CSS Depth of knowledge questions about front-end frameworks Depth of knowledge questions about various databases (SQL/noSQL) Brainteasers (think SAT prep from high school) Clone and explain X GitHub project that you created, what tradeoffs you made, and what you would do differently in the future Give us a 1-hour presentation on any topic of your choice (consider this a red flag, unless your job specifically requires interfacing with customers or pitching your ideas) The variety made it tricky to know what to study. After each tech screen and onsite, I jotted down a robust list of all the questions I was asked in each interview. This became my study guide for future onsites. When I missed questions, I tried to view it as a learning opportunity. Step 10: Bring snacks Maybe it’s just me, but answering technical questions amidst a revolving circle of new people makes me famished. In my first onsites, I got progressively worse at answering questions as my blood sugar dropped. No surprise - these didn’t result in an offer. In my third, they scheduled me from 10 - 2pm with no lunch break, so I specifically asked for one. This worked - sort of - until the hiring manager followed me to a lunch spot while grilling me rapid-fire on 50+ JavaScript questions. He ignored my (repeated) requests for a quick mental break. Another no-go. Finally, I found a viable solution - bringing a large green smoothie to each interview. This was a lot better than trying to sneak peanut M&Ms into my mouth in the restroom (besides, I was usually escorted to-and-from the bathroom so that wasn’t really an option). No, really. Tired but blood-sugared after onsite 5. Step 11: Refine answers to behavioral questions and avoid burn out Where do you see yourself in 5 years? One of the questions that stumped me in interviews was, “Where do you see yourself in 5 years?” To be honest, I still don’t know. There’s a manager track and an individual contributor track. There are plenty of engineering career paths that I still don’t fully understand - web, mobile, site reliability, and DevOps, to name a few. Then there’s back-end, front-end, and full-stack. Sometimes the lines between these roles are clear, sometimes they’re blurred. What I learned during the course of my search is that while I don’t know which path I’ll take, there are certain tasks I like more and less than others. I don’t love playing with pixels on a website, but it’s fun to design for mobile. Designing architecture and setting up a database is a bit tedious, but I enjoy taking large amounts of data and manipulating it or spinning it into an interesting visualization. So who knows where I’ll end up. For now, I’m going to try to do what’s fun and exciting. Some general thoughts Coding challenges are a learning opportunity There were plenty of coding challenges that I attempted and was ultimately too embarrassed to turn in. Then, there were some that I didn’t finish but turned in anyway, along with an explanation of what debugging steps I had taken along the way. At first I saw incomplete coding challenges as a sign of my own inability - some days I wondered if I wasn’t cut out to be an engineer. But they became fun when I shifted my mindset and started to think about what I learned from each one. 
For example, one of them gave me a deeper understanding of asynchronous API calls, while another helped me realize the importance of addressing edge cases and error messages. One taught me how to debug Ruby on Rails. Take rejections in stride The same was true of each tech screen and onsite. At first, the rejections stung and fueled my insecurities. Then, rejections became normal. I learned a lot more when I was able to brush aside my self-doubt and be curious about what I could learn from each engineer taking time to speak with me. This was not how I felt most of the time, but I tried Everyone has a different way of approaching problems, and I was fortunate to learn from a few dozen engineers in the industry through the interview process. Find a mentor I was lucky enough to have an all-star mentor throughout the interview process. For three months, my mentor called and emailed every week to ask how the job search was progressing and what blockers I was facing. I've heard a lot of fellow engineers say a mentor sounds nice, but they weren't sure what questions to ask. Sometimes we spoke about tactics, such as how many applications to send, how to write an effective Git commit, or how to go above and beyond in a coding challenge. Other times, she simply reminded me that despite the (many) rejections, I was becoming a stronger engineer and therefore closer to finding my dream company every day. A mentor can keep you accountable to your goals, help you through feelings of burnout, and connect you to the right resources for deeper learning. I'm grateful to have had an advocate throughout the job search process, and I'm looking forward to being able to pay it forward as a new engineer! Conclusion Some days, switching careers felt much harder than I expected. My mentor can certainly attest that there were plenty of days when I wasn't sure I could do it. Becoming an engineer took a lot of hustling. It meant reaching out and expanding my professional network, becoming comfortable with the fact that there's a huge learning curve ahead of me, and ignoring all the naysayers. It meant finding the right online resources that worked with my learning style. It meant quieting the part of my brain that told me I wasn't capable and instead focusing on gleaning new knowledge. Every day I worked on a new project, studied a new algorithm, or answered questions in an interview, I became a better engineer. Was the struggle worth it? Absolutely. I'm thrilled to say that I found a role that's perfect for me, where I can continue learning and growing. My professional network is stronger than ever, and most of all, I gained confidence in knowing that I can put in the work to make my dreams a reality.
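The "numbers game" in Step 8 is just ratio arithmetic; as a small illustration, here is how one could track that funnel in Python. The absolute counts below are made up for the example; only the conversion rates quoted in the article (roughly 26%, 51%, and 28%) come from the author.

```python
# Hypothetical counts; only the conversion-rate arithmetic mirrors Step 8 of the article.
funnel = {
    "applications": 150,   # assumed
    "phone_screens": 39,   # ~26% of applications
    "tech_screens": 20,    # ~51% of phone screens
    "onsites": 6,          # ~30% of tech screens (the author's rate was 28%)
}

stages = list(funnel)
for prev, curr in zip(stages, stages[1:]):
    rate = funnel[curr] / funnel[prev]
    print(f"{prev} -> {curr}: {rate:.0%}")
```

Tracking the ratios this way makes it obvious which stage to practice for; in the author's case, a weak tech-screen-to-onsite rate pointed to brushing up on technical skills rather than sending more applications.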
https://medium.com/free-code-camp/how-i-switched-careers-to-become-a-software-engineer-in-11-months-and-how-you-can-too-9849afabc126
['Amanda Bullington']
2018-10-09 17:10:48.655000+00:00
['Life Lessons', 'Software Development', 'Career Advice', 'Technology', 'Career Change']
689
Systems Thinking My Way Through: Automation
Modern Economy So that was the warm-up, a brief glimpse of automation 200 years ago. Now comes the hard part: thinking about the future. To frame it, there are a few things I want to highlight. First off, we live in a precarious system. Unemployment in the Great Depression wasn't 90%, wasn't 50%, it was 25%. In the Great Recession it barely grazed 10%. The now ubiquitous Oxford study from 2013 estimated 47% of US jobs could be computerized in the next decade or two. I don't know if that's true. All I'm saying is, if it's even half true we would be in the most severe economic crisis in almost a century. When listening to those who are most concerned with automation, their emphasis is often not just on the quantity of jobs that become obsolete. They're more focused on the rapid, even accelerating speed at which it will happen. It's more complicated than just the quantity of old jobs vs. the quantity of new jobs. For example, if 50% of jobs are automated in 50 years and an equal number of jobs are created, we should make it out OK. If 50% of jobs are automated in 10 years, even if we create an equal number of jobs it's hard to imagine that going smoothly. To elaborate on that: if we're turning over not just jobs but entire careers every few years, at a time when 40% of people can't afford a $400 emergency, education costs are at an all-time high, and employers are reluctant to train their own workers, it's hard to imagine not having a crisis of some sort. We haven't granted ourselves much wiggle room here. Of course I'm aware of the absurdity that higher productivity could somehow lead to more poverty, but this is the situation we are in. Stephen Hawking's last Reddit AMA is evergreen. I guess you could say "There must be something rotten in the very core of a social system which increases its wealth without diminishing its misery." If only I could recall who said that. Moving on… First I tried to make a systems map of some issues facing the modern US economy. After this we'll look at how automation and potential reforms would fit in. We'll start in the middle. Two elements that could theoretically protect labor power are unionization and labor laws (the red in the middle), neither of which are particularly significant factors in the US at the moment. On the other hand, there are elements that detract from labor power: capital mobility, offshoring, outsourcing, etc. have taken off in the past 50 years. That's the cream-colored stuff on the left. When the labor market becomes more competitive, people want to be more professional and more educated to get better jobs. More people want to go to college, leading to higher tuition and higher personal debt. (Light blue on top.) Higher personal debt of course leads to long-term declines in consumer spending. Now we're going to move to underemployment and see what that does. That's the cream at the bottom, and it moves up to the dark blue or purple. (I'm colorblind and don't feel like phoning a friend.) As underemployment increases, more people are in part-time work or the gig economy. As a result they likely do not have the benefits they may have had. (They may not have had them anyway.) When underemployment becomes chronic, we have a labor exodus, as people stop looking for work. This labor exodus contributes to low unemployment coexisting with low wages, since people who have given up looking for work are not counted as unemployed. This exodus is exacerbated by the fact that one thing which actually has gotten cheaper is entertainment.
Flat-screen TVs and Netflix are tempting ways to pass the time if you have low expectations anyway. Lower wages, a labor exodus, and more money going to healthcare expenditures all lead to a decline in consumer spending, which theoretically leads to a decline in corporate profits, although empirically they still seem to be managing for now. So now let's look at the same map in a future of automation with a few potential reforms thrown in. I tried to stick mainly to the reforms that are within the realm of discourse today. You can say these aren't particularly imaginative, and that they only tweak the system, which is true. Radical new systems don't fit on the same systems map. That's kind of the point. They're new systems. Starting at the top, going counterclockwise. 1. Universal Basic Income: It has become a trendy topic among economists and futurists. Its main impact here would be maintaining or increasing consumer demand. And I'm assuming it's being implemented in good faith and it isn't just an excuse to cut other forms of welfare. 2. Free Public College: It would make attending university more financially attractive, so enrollment would increase. Personal debt from university would go down. Tuition would become functionally cheaper for students, but not necessarily cheaper overall. State-funded education could vary significantly based on the retraining needs of workers and employers in a future of rapid automation. 3. Nationalization: It would obviously reduce corporate profits. It's not really a popular idea at the moment, but I think it is a possibility. It would probably only happen on a large scale as a response to an existential threat like war or climate change. 4. Global Tax on Capital: On the off chance that Piketty's idea gains traction, this best fits as a reduction of corporate profits. It's not perfect but it works well enough here. 5. Worker Hours Reduced: If productivity continues to increase, a potential response could be to reduce the expected working hours of the population, as Keynes predicted. 6. Automation: This process is always happening to some extent, but if it happens rapidly on a broad scale, it would have the most impact. The rapid increase in structural unemployment is what would likely send the system into chaos. 7. Right to a Job: The idea has been injected into US political discourse for the first time in decades. If it became a reality, it would increase the power of labor and reduce underemployment. 8. Single Payer: This change is probably coming regardless of automation, but ensuring healthcare regardless of employment would be an even more significant element in a system potentially stressed by unemployment. There are plenty of ways to minimize growing pains, and each will have its own side effects (including effects we can't see here). I don't see any silver bullets here; each has its own potential drawbacks. I think the main takeaway, though, is that even without much imagination we have enough ways to mitigate whatever economic growing pains we may face to be optimistic about the future of work. And each solution listed fits differently into the map, meaning we can be somewhat surgical with our reforms. What concerns me is more what Stephen Hawking said. Solutions can be as clever or as ham-fisted as we like. The hard part is actually accomplishing them politically. Each dollar spent on relief for workers has to prove it's a better cause than more planes that don't work and ships that keep crashing into each other.
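Since the post reasons over a systems map of causal links, here is a small Python sketch of how those links could be encoded and traced. The edges below are limited to relationships stated in the post itself; the dictionary-and-traversal representation is an illustrative assumption, not the author's actual map.

```python
# Causal links taken from the post; this dictionary is a simplified stand-in for the full map.
links = {
    "capital mobility / offshoring": ["lower labor power"],
    "lower labor power": ["underemployment"],
    "underemployment": ["labor exodus"],
    "labor exodus": ["low wages despite low unemployment"],
    "low wages despite low unemployment": ["lower consumer spending"],
    "higher personal debt": ["lower consumer spending"],
    "lower consumer spending": ["lower corporate profits"],
}

def downstream(node, graph, seen=None):
    """Follow the arrows from one element and collect everything it eventually affects."""
    seen = set() if seen is None else seen
    for nxt in graph.get(node, []):
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, graph, seen)
    return seen

print(downstream("underemployment", links))
# -> labor exodus, low wages despite low unemployment,
#    lower consumer spending, lower corporate profits
```

Encoding the map this way also makes it easy to see where a reform plugs in: universal basic income, for example, would attach to consumer spending, since the post says its main impact would be maintaining or increasing consumer demand.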
https://medium.com/@econsystemsthinking/systems-thinking-my-way-through-automation-f4da19a73a90
['Econsystems Thinking']
2019-10-28 02:18:06.289000+00:00
['Systems Thinking', '4th Industrial Revolution', 'Political Economy', 'Automation', 'Technology']
690
Komodo: A Detailed Guide
Originally published in the NOWNodes blog. The Komodo platform, aimed at bringing new standards in cryptocurrency security and anonymity protocols, was founded in September 2016. Headquartered in San Gwann, Malta, the Komodo platform leverages Zcash zero-knowledge proofs to help its users make 100% untraceable transactions. These transactions are protected by Bitcoin's petahash Proof of Work mechanism. Komodo's Unique Blockchain Platform The blockchain platform offered by Komodo is open and composable. These fully composable solutions are built on multi-chain designs and prove beneficial for solo developers, growing startups, and larger enterprises alike. These entities can leverage this composability to create a customized blockchain that is capable of hosting applications, software, and any other type of blockchain-based solution. The composability of Komodo's blockchain provides a wide variety of components that can be activated as and when needed and can be used in different combinations and configurations to meet the needs of a specific use case. Although launched in 2016, the development of the platform started further back in 2014. The initial development process took its cue from the tested technology paradigms of Bitcoin and Zcash. In offering customization facilities for the blockchain, Komodo enables projects to build a fully customizable and independent Antara smart chain that remains connected to the multi-chain ecosystem of Komodo through platform synchronization. The smart chain is protected with Bitcoin-level security. Antara equips Komodo with a blockchain development toolkit. The tools within this kit enable users to build advanced business logic. This facility is complemented by Komodo's on-chain native app support. The multi-chain design of Komodo allows many sovereign smart chain projects to co-exist in the same ecosystem, each with its own consensus, network, and coin. However, these independent chains are capable of cross-chain transfers of value and logic, leveraging Komodo Core's Platform Synchronization technology. Antara smart chains keep updating as time and technology progress: they are future-proof, and their users receive free updates as and when they are released. New features are also added to the base Komodo technology stack at frequent intervals, keeping the technology dynamic. Developers, start-ups, and enterprises who build their own independent smart chain to meet their needs are free to scale up when they need to. The Antara smart chain allows them to scale on demand by bringing together multiple smart chains under one logical chain. Beyond expanding the number of chains, Antara smart chain technology also enables scaling up a chain's functionality. One can introduce several new features and options to a chain, including oracles, tokens, trustless price feeds, stablecoins, etc. Apart from enabling projects to create custom, independent smart chains, Komodo also helps businesses and individual users with a set of white-label products, including multi-coin wallets, an atomic swap DEX, custom block explorers, and hosted infrastructure solutions. In the next segments, we will have a detailed look into these offerings of Komodo.
Atomic Swap DEX and Multi-Currency Wallet The AtomicDEX offered by Komodo combines a highly secure wallet with a non-custodial decentralized exchange. Users can leverage this combination to store their holdings and trade on a peer-to-peer basis without having to give up control over their funds. The multi-currency wallet helps frequent crypto users get rid of the host of different wallets otherwise required to hold each type of coin. One can conveniently store BTC, ETH, and a dozen more types of coins in the test version of the wallet in a secure and non-custodial way. Although the current beta release of the wallet supports about a dozen coin types, the final launch will have the capacity to store hundreds of different types of crypto coins. On the other hand, the biggest facility provided by the Atomic Swap DEX is that it lets users trade any coin without having to go through a centralized exchange first. Since one does not have to go through an exchange to trade, there is no giving up control over one's funds. Atomic Swap DEX is known as the industry's most advanced, mobile-first atomic swap protocol so far. The Benefits of Komodo's Atomic Swap DEX The next-generation decentralized exchange (DEX) technology introduced by Komodo addresses a lot of legacy issues when it comes to the trading of digital assets. Although blockchain technology is built on decentralization, trading is still largely dependent on centralized exchange services. But these CEX services have their limitations and disadvantages. Centralized exchanges suffer from a lack of security. According to estimates, in 2018 alone, all centralized exchanges combined lost digital assets worth US$1 billion to hackers and cyber attackers. This is a matter of grave concern for users, as they need to give up control of their funds before trading through centralized exchanges. Trading through centralized exchanges also proves costlier than average. Not only does the user pay 0.33% trading fees applied to the value of the trade, but one also needs to pay fees for withdrawal and regaining control over his or her funds. Centralized exchanges also fail to provide a good range of trading pairs. Most CEX services have one or two trading pairs for most coins and tokens. The AtomicDEX offered by Komodo takes care of each of these problems. It offers the highest standard of security to its users, who under no condition have to give up control over their funds by sharing their private key with the exchange. The Komodo DEX services also offer a wide range of unrestricted trading pairs to users, where any asset is eligible for direct swapping. The fees are also much lower on the DEX as compared to centralized exchange services: in AtomicDEX, market makers do not need to pay any fees, and takers need to pay only 0.15%. Komodo: Tokenomics The native token of Komodo is the KMD token. The total supply of the KMD token is fixed at an upper limit of 200 million tokens. Out of these 200 million, 100 million were pre-mined and were distributed through an ICO, or Initial Coin Offering. Of these 100 million Komodo tokens that went into the ICO, 90% were available for the investors who participated in the ICO. The remaining 10% was set aside for the future development of the platform. Apart from the 100 million pre-mined KMD tokens, the other 100 million are released over time through Komodo's proof-of-work consensus mechanism.
Komodo users with more than 10 KMD tokens held in a single address are eligible for a 5% reward. To give an idea of the trading potential of Komodo, its price reached an all-time high of USD 12.23 on December 22, 2017, trading at a volume of more than USD 1.2 billion. Latest Komodo Updates The Komodo platform completed its first successful stress test on 31 October 2020. During this stress test, a total of 2,000 traders attempted to perform swaps through AtomicDEX. The test gave Komodo developers a host of valuable data with which to test the robustness of Market Maker 2, Komodo's highly sophisticated order-matching technology, which is based on libtorrent. This update to Market Maker 2 will make Komodo far more useful than its previous versions. It gives the DEX a package of free, battle-tested servers for DHT bootstrapping and a significantly larger network of DHT peers. It also helps correct network address translation and routing anomalies. The use of libtorrent in Market Maker 2 allows AtomicDEX to keep operating while switching IPs, which becomes particularly useful when the user is mobile and might be using different Wi-Fi routers, cellular towers, etc. As a provider of the next generation of DEX technology, the Komodo platform has established itself as a major driving force in the evolution of blockchain technology. With a strong foundation comprised of the Antara framework, the delayed proof-of-work mechanism, atomic swaps, and platform synchronization features, Komodo is expected to continue its run as an innovation and thought leader in the world of blockchain in the days to come.
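As a small illustration of the fee figures quoted above (a 0.33% trade fee on a centralized exchange versus 0% for makers and 0.15% for takers on AtomicDEX), here is a quick Python comparison. The trade size and the flat withdrawal fee are made-up assumptions; only the percentage fees come from the article.

```python
# Only the percentage fees (0.33% CEX, 0.15% DEX taker, 0% DEX maker) come from the article;
# the trade size and the withdrawal fee are hypothetical.
trade_value = 1_000.0          # assumed trade size in USD
cex_trade_fee_rate = 0.0033    # 0.33% per the article
cex_withdrawal_fee = 5.0       # assumed flat withdrawal fee
dex_taker_fee_rate = 0.0015    # 0.15% per the article
dex_maker_fee_rate = 0.0       # makers pay nothing per the article

cex_cost = trade_value * cex_trade_fee_rate + cex_withdrawal_fee
dex_taker_cost = trade_value * dex_taker_fee_rate
dex_maker_cost = trade_value * dex_maker_fee_rate

print(f"CEX: ${cex_cost:.2f}, DEX taker: ${dex_taker_cost:.2f}, DEX maker: ${dex_maker_cost:.2f}")
# CEX: $8.30, DEX taker: $1.50, DEX maker: $0.00
```

The exact withdrawal fee varies by exchange and asset, so treat the comparison as directional rather than precise.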
https://medium.com/coinmonks/komodo-a-detailed-guide-694c4e77c79a
[]
2020-12-28 13:02:12.108000+00:00
['Technology', 'Atomic Dex', 'Komodo', 'Blockchain', 'Cryptocurrency']
691
Everyone Doesn’t Need To Create An Online Business To Have An Opportunity To Work From Home And Make An Honest Income Online, Or Even Possibly Make Six Figures?! Here’s How
Everyone Doesn’t Need To Create An Online Business To Have An Opportunity To Work From Home And Make An Honest Income Online, Or Even Possibly Make Six Figures?! Here’s How Michael Rocha Jan 14, 2020·9 min read So, I am up late working on a project report. Similar to a proposal, service agreement list and any other similar terminologies that I’m not thinking about right now. And I was just thinking…,” how many people actually know! that you can still work from home and make a very reasonable income without actually creating a business or have no intention of creating a business”? The reason why I am bringing it up (maybe the second time lol) is because I honestly didn’t even think about online or remote employment! If you’re that human being and you’re reading this. Continue on! You will be like the image below. If you’re in the msrket, of course… It also doesn’t help. When every other ad or guru pops up! They’re talking about creating and running a successful business!? And how they can change your life in X days or have the secret MOJO of Austin Powers, the spy who smacked the guru! 🤚🏽 Now don’t get me wrong! I like a good ad and marketing tactic just as much as the next person who likes a great marketing strategy! However, I do agree that creating and running a business successfully is an “ideal” scenario for anyone! Who is willing and able to turn said business into something real and earn tangible results and fulfillment from their business! Going solely after the money never really turns into a meaningful and fulfilled business. When I’ve focused on results, helping, and working hard it always “works out” or certain business investments comes to fruition! However! There are men and women out in this global “Web” that call the internet’s! have no desire to start a business what so ever! PERIOD! This, “these, and/or “Those” individuals. For the most part. Are Minimal, live very modest, or they can be with their spouse of course too! Lol. Solely from experience that’s what is seems like! Typically more common in retired vets, older men and women simply looking to find part time employment option that suits their needs! For an extended period of time or In definitely. As that position maybe what they do for the rest of their lives. Who knows, right!? Good for them! Depends on the company, those “Individuals” generally get hired rather quickly! If those positions are still available, the reason is because they fill positions that are typically harder to fill! Take someone who so strictly looking for dayshift or a certain shift in general. Dayshift is more than likely going to be a hot commodity in any business, right? Funny enough. If those “individuals” came in with an application in hand for a part time position and they’re open to whatever. Within reason. They will get employed first! As long as they have the basic qualifications to perform said duty’s. Oddly enough, in some cases, it’s very easy to get acquired or employed by a company that has the necessary remote positions available. You may be surprised when you search and look at the first company you chose to research! There are hundreds if not hundreds of thousands of company’s that need support! Don’t let the title of the position fool you! Support is a very, very, mucho,and mucho importante posicion! Because without customers your business doesn’t survive! Very long at least. It helps the company’s retain customers and reduce churn! Which could be a bonus metric for their employees! 
Some company’s also have support “retention teams” to help with retaining closed accounts. Their main duty is to retain! This is why platforms like Zendesk and-or Intercom Do so well! Because they know! Just how important a support department is and or can be! While also being able to draw or pull metrics from said platform that has been used currently or prior. (yes, there’s more support software options available, I know but I’m busy here lol can’t you tell? Lol JK how important sales are, but they also I’ve personally used both of those platforms! In live action settings while also retrieving email responses (yes there are several others. However I’ve only used these two majority of the time) Speaking of support platforms themselves. You could apply to work as a support agent. In a support driven, creative, and one powerful piece of software! How about them apples?!…how more “support” can ya get? That’s my country accent coming out, fellas! By golly we will teach these fellers a lesson about being a support agent 😂 Now, let me be clear. This is one example of a position. Are there other positions within support departments? Absolutely! There are many other positions online that is within a support department and also not “support” related. However, I’m going to use “support” as my example. With all the newcomers, up and coming tech/ software company’s. They all have a support department or will need one very soon! Notice how I mention, “new” or “up and coming” tech companies! And not well established… (I will get into the pros and cons of new tech companies vs well-established tech companies for employment of a potential support position with either, said tech company) New Tech Company: -Support Agent or also known as Technical Support Agent. We can also mention the position titles “Customer Service” agents as well. For the sake of this example to cover the bases. A new tech company has great potential to rise in the ranks! And even become a partner of a said tech company! (When I say “tech” company. I’m mostly referring to SAAS companies. Which means Software As A Service. Not every single tech company/s are CREATED EQUAL! A good portion of tech companies acquires funding to create, launch, and scale their company. So do your due diligence and research the company and see what you can find on them. I highly recommend a software company that is “fairly” new. Potentially at least one year but not longer than three years. That tends to be a sweet spot! However, if you are wanting and/or willing to work from home and have the ability to rise fast and see if they have partner options then find a software company with one year or less in business. Some software companies that have been in business for more than a year, may also have the same options. Remember, do your due diligence! Why? let’s save the “why” for when we get into “well established” companies. As that will more than likely answer your question of, why. Let’s get back on track… Now, we need to cover or “button up” a few details real quick. As, I said. Not every tech company is created equal. They will have a similar internal structure and some may have a completely different internal structure. New Tech cont’d… And I was just thinking…,” how many people know, that you can still work from home and make a very reasonable income without actually creating a business or have no intention of creating a business”? 
1- Potential to rise quickly 2- Potential to partner (let's continue from here) 3- Potential to help build a company's internal structure 4- Potential to fill your resume with powerful skills, management, troubleshooting, and much more… 5- Company benefits (some do) 6- Company culture I will stop here. Or we will have a book of a post 😧📚 Cons of new tech: 1- No benefits 2- No internal structure established (mainly employee department-based structure) 3- No processes or standard operating procedures 4- No clear pay scales 5- Entirely funded by someone else's money (could be a pro too, but I tend to consider this to be a con more than a pro personally) (some may say, "It's more secure"; however, I disagree) 6- This plays on the internal structure, processes, and SOPs not being established (when there is minimal or no internal structure whatsoever, this can cause a lack of good company culture) 7- Company culture Research!!! Find what aligns with you personally and where you see yourself in 5–10 years. Then choose from there and be mindful of the following with new tech. I rate these higher, personally: 1- Company culture 2- Internal structure 3- Mission statement and what the company is planning to achieve and/or do. Maybe you're willing or wanting to work from home because you have kids, you're retired, or you just graduated. Whatever the case is… now let's look at a well-known tech company. They're well established now, but remember: they were new and up-and-coming at one point in time. So that's why I said, go with a newer tech company. Uber! Hate 'em or love 'em, they did something quite amazing! Even though the creator who started it all, or co-creator, Travis Kalanick, had feverishly worked on new tech companies, founded some, lost some, and made some. The most recent tech company he "started" was Uber, of which he owned (in 2019) roughly 114 million shares, which he could dump all of if he wanted to. Because that's a cool $5 BILLION! (A little FYI on Uber: in August of 2018 they raised $500 million from Toyota.) I hope you all found this helpful. If you all would like more content like this post, let me know. Or if you would like help preparing a resume live here on Facebook, we can most certainly arrange that too. Anyway, I got derailed there. Back on the wagon here. Remember when I said there are really two types of tech companies that are founded? Built on raised capital (angel investors), and bootstrapped. I'm probably more biased on which one I have more respect for. I do prefer a tech company that completely bootstraps and makes sales via a product or service to grow and be successful, over a company who received 10 million to take off and be super successful! However, it does take a great leader and team to keep a tech (SaaS) company open and profitable, not to mention scaling one! Which is probably the biggest hurdle of the three. You see, these are all solid tips, discussion points, and just overall important factors to consider in your job hunt. Back to Uber. There was a story released by one of those major publishing companies not that long ago, about two early employees of Uber. Or at least one of them was, and I am almost certain it was a woman (unless I'm getting Uber and Slack mixed up, or mixed up entirely). I would do some research.
However, I'm about to pass out in a nice comfortable bed, and get right back up in about 5 hours. So, I am wrapping up this post/content piece. However you decide to "dice" it, "slice" it, or "chop" it, the decision solely lies on YOUR shoulders. As I said, I'm probably more biased on which one I have more respect for. I do prefer a tech company that completely bootstraps and makes sales via a product or service to grow and be successful, over a company who received 10 million to take off and be super successful! But I'm here to tell you, that DOES NOT matter! How do I know? I've done it. You may use a particular piece of software every day… one that you love and would recommend to anyone, anywhere! Go to their career page or email them directly and have a resume in hand! And be mindful: these are tech companies and they may ask for a video in your application. I hope you all found this helpful. If you all would like more content like this post, let me know. Or if you would like help preparing a resume live here on Facebook, we can most certainly arrange that too. Good night everyone, Mike is OUTTIE! P.S.: Would you prefer I put together a free resource that lists all the tech companies you would currently want to apply to, or look further into, upon post creation? I can do the heavy lifting while you do the applying and employing :) P.P.S.: There are more details to consider; I just had to cut it short. Really think about what you want to do or why you would need to. You could very well be a support department director or senior lead! And possibly a team-type supervisor! And you will do all the work from your laptop! Oh yeah! Some companies will also pay for or send you a new laptop. Some companies do Chromebooks, some do Microsoft, and of course good old MacBooks. Not a crazy amount of Macs are given out that I know of, offhand. I do know there are some for sure. Chromebooks tend to be more budget-friendly, while also appeasing the company's HR department and/or the company's investors/owners. For their security! (Google for Business.) Plus, an enterprise registration on that new Chromebook will lock that computer down solely to the company. So if you do get one sent and it's not mandatory?! Don't enroll in the enterprise! Fair warning! Ok, finally and officially signing off for the night, or should I say early morning at 4:30 am EST! Bring on the coffee! Ok bye, Mike.
https://medium.com/@michaelrocha365/not-everyone-needs-to-create-an-online-business-to-work-from-home-and-make-an-honest-income-2d44508d5395
['Michael Rocha']
2020-01-14 18:27:34.834000+00:00
['Affiliate', 'Technology News', 'New Business Opportunity', 'Freelancers', 'Self Employment']
692
New England’s Series A Deals — Part III
Recently, The Buzz profiled a number of market maps on the New England Venture/Startup Ecosystem, including the following: These market maps continue to be very positively received by our subscribers, so we're continuing to deliver. This week, we have the final installment of the New England Series A Deals articles. Here's the list:
https://medium.com/the-startup-buzz/new-englands-series-a-deals-part-iii-8f597cbce0f0
['Matt Snow']
2020-11-14 20:02:17.124000+00:00
['Startup', 'New England', 'Venture Capital', 'Fundraising', 'Technology']
693
Digital Marketing ft. Esther Kinuthia — Episode 006
Esther Kinuthia is a Young Kenyan in Tech, currently based in Nairobi, Kenya. She recently pursued her Master's in Digital Marketing Strategy at Trinity Business College in Ireland, and she joined us on the podcast to discuss her journey thus far, lessons learnt, and how a digital strategy can help you elevate your business. She invited the Tech for Granted team to a meet-up she had organised with her group, Young Kenyans in Technology (#YKT), and members of Centum Real Estate and Nabo Capital. The session was filled with gems on real estate and investment opportunities; as a team, it was more than an honor to be part of it, and a lot of new networks were created. You can find a recap of the event on the Young Kenyans in Technology YouTube channel. Esther's drive is quite infectious and she commands a lot of excellence in her pursuits. You can listen to the episode on Anchor, leave a comment, and share with a friend. Enjoy! Hosted by Sound by Bukachi Akatsa of For the Culture Videographer and Producer
https://medium.com/tech-for-granted/tech-for-granted-podcast-episode-006-digital-marketing-ft-esther-kinuthia-b3cb7336b991
['Tech For Granted']
2019-07-02 09:25:08.229000+00:00
['Podcast', 'Kenyans In Technology', 'Tech For Granted', 'Digital Marketing', 'Excellence']
694
From Monolith to Microservices
How we got started with microservices We started with just a few microservices, as we read that knowing when to create a new microservice is as much of an art as it is a science. With this in mind, we erred on the side of caution and only split out the obvious pieces first. We decided to keep our user, authentication, model metadata, alerts, and tenant management logic in a service and split out the following pieces into separate microservices: inference ingestion, and inference & metric retrieval. Inference ingestion was an obvious candidate for a new service because it could be easily decoupled from the rest of the platform, and it was something that we knew would have to scale horizontally to meet customer data load. We designed inference ingestion as a lightweight service that makes use of Kafka to ingest both streaming data and batch data. From here, it naturally made sense to create an additional service that was responsible for retrieving inferences and calculating metrics on these inferences. It was important to consider performance here and we knew that our choice of data store was crucial to our ability to serve up insights quickly. With this in mind, we were able to leverage the expertise of our data engineers to choose the right data store for our inferences and write the microservice that was responsible for retrieving inferences and calculating metrics. While our data engineers worked to build these microservices, the remaining backend engineers began tackling API design and the remaining product work. After designing our APIs and getting through the implementation of the user, authentication, metadata, alerts, and tenant management, it became evident that the alerting part of our platform was a good candidate for its own microservice. Below you'll find a simplified diagram of the architecture we arrived at.
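Arthur has not published the ingestion code itself, so the following is only a rough sketch of what a Kafka-based inference ingestion loop could look like in Python (using the kafka-python package); the topic name, record fields, and store_inference helper are assumptions, not the actual service.

```python
# Rough sketch only: the topic name, record schema, and store_inference() are hypothetical.
# Assumes the kafka-python package (pip install kafka-python).
import json
from kafka import KafkaConsumer

def store_inference(record: dict) -> None:
    """Stand-in for writing the inference to the data store chosen for fast retrieval."""
    print(f"stored inference for model {record.get('model_id')}")

consumer = KafkaConsumer(
    "inferences",                              # hypothetical topic for streaming and batch loads
    bootstrap_servers=["localhost:9092"],
    group_id="inference-ingestion",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

# Each message carries one inference (model id, inputs, prediction, timestamp, ...).
for message in consumer:
    store_inference(message.value)
```

Because consumers in the same group share a topic's partitions, running more copies of this service is what provides the horizontal scaling described above.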
https://medium.com/arthur-engineering/from-monolith-to-microservices-fed2fe56478c
['Kelsey Kerr']
2020-12-11 21:05:42.597000+00:00
['Microservices', 'Go', 'Startup', 'Technology', 'Software Development']
695
Daily Blockchain Use Cases and News #12
Steve Wozniak (co-founder of Apple) announced that he is working with crypto startup Equi Capital. Shelf.Network, a blockchain auctioning startup from Berlin, was accepted for Y Combinator's advisory path program. China banned multiple blockchain & crypto news accounts and event venues hosting crypto events. Forbes listed 20 use cases for blockchain in education. The global automotive blockchain market is expected to reach $1.6 billion by 2026, as reported by Business Intelligence and Strategy Research (BIS Research).
https://medium.com/blockchaincircle/daily-blockchain-use-cases-and-news-12-49f197fd3bd1
['Pavel Romanenko']
2018-08-22 09:54:25.623000+00:00
['Bitcoin', 'News', 'Cryptocurrency', 'Blockchain', 'Technology']
696
You’re Wrong: Apple Can Justify the $550 Price Tag
Another reason they’re so expensive is that they’re just really high quality. The headphones make the other competitors look old and bulky. When compared side-by-side, the Sony’s look a decade old. Plus Apple chose to use metal materials for earcups and much nicer plastics on other parts. The build quality just overall looks so much better. Sure the metal makes them a lot heavier. But Apple’s willing to take that sacrifice. It just looks so much better. Features — Specifically Spatial Audio We’ve gotten a taste of the Spatial Audio magic on the AirPods Pro. And while I haven’t had an opportunity to try out Spatial Audio on the AirPods Max yet, I’m confident it’ll be even better than the Pros due to the additional H1 processor and a total of 9 microphones. Apple When you first hear about it, Spatial Audio seems like a gimmicky feature that’s trying to be the next-gen surround sound. The thing is, it’s actually really immersive and brings more depth to movies. I recently recommended my friend to try out Spatial Audio. This is what I got back: whoops Look, if you have access to AirPods Pro (or AirPods Max for that matter) you should try out the feature. It’s supported on all Apple TV+ shows, Hulu, and Disney+. Unfortunately, it has not come to Netflix or Prime yet. The feature is also waiting to be added to Macs and Apple TVs for some reason. Regardless of Spatial Audio, the AirPods “brand” has a bunch of other features that you don’t see on competitors. Starting with the AirPods connecting “magic”, Apple has the experience refined to a tea. Reviews are also starting to say that the transparency mode on AirPods Max is also next-level. The transparency mode takes cues from Spatial Audio and has the same spatial effect but with your actual surroundings. You’re more immersed in real life (pretty sad sentence, I know). It’s the Little Things Combine all this and we’re left with the question of whether the $200+ price difference is worth it. Apple obviously says yes. And I’m, hesitantly, kinda agreeing with them. Yes, $550 is a lot of money. And no, I won’t be buying them anytime soon. But the headphone landscape is overall pretty expensive in the first place. Forgetting about the Sony XM4’s or Bose 700’s, the $500 range is pretty affordable and fair. And with those extra “Apple magic” features, the headphones themselves blow the Sony’s and Bose away like nothing. I’m putting my bets on this product having a decent amount of success. Feel free to tell me why I’m wrong in the comments.
https://medium.com/macoclock/youre-wrong-apple-can-justify-the-550-price-tag-69255170d3f7
['Henry Gruett']
2020-12-26 18:09:31.823000+00:00
['Technology News', 'Technology', 'Apple', 'Technews', 'Tech']
697
“Using College to Start Your Career Right” — Interview with Jesse Steinweg-Woods -Ph.D, Senior Data Scientist at tronc
Vimarsh Karbhari (VK): What are the top three books about AI/ML/DS that you have liked the most? What books have had the most impact on your career? Jesse Steinweg-Woods (JS): I have many other book reviews at my website: Book Reviews VK: What tool or tools (software/hardware/habit) that you have as a Data Scientist have the most impact on your work? JS: GitHub and the Python library pandas. GitHub because it allows all of my code to be in one place and allows me to share it with my team members (along with using some of their code in my work). It also allows version control, code reviews, and a backup for my code. Pandas because it allows me to prototype quickly and load data in a variety of formats easily. VK: Can you share the Data Science related failures/projects/experiments that you have learned from the most? JS: Bugs in the recommender system I helped build showed me the importance of logging machine learning systems that are used in a production setting. I also had a project at my last company where the project didn't work because my assumptions about how the data was being collected were totally flawed. In the churn model I designed, negative examples weren't being generated properly and were introducing a bias in the model. I didn't detect this until I compared the underlying distributions of the features. Check this if a model isn't working properly in real life to see if it had some sort of a training bias. VK: If you were to write a book, what would be the title of the book? What would be the main topics you would cover in the book? JS: "Using College to Start Your Career Right." I don't think I handled college quite right the first time. Universities don't focus enough on teaching students career planning skills, and I feel this is sorely lacking in higher education. I would like to help solve this problem with a book from my own perspective. VK: In terms of time, money, or energy, what are the best investments you have made which have given you compounded rewards in your career? Any book, project, conference, or meetup; it can be anything. JS: Designing my own website and pet projects. I learned so much about data science and software engineering doing this. Following top data scientists and machine learning academics on Twitter. You learn a lot that way about new publications, novel applications, or just funny anecdotes others in the field share. Networking with other data scientists. This provides a great way to share ideas and new discoveries, along with tips from others that can help you in your own projects. VK: What are some absurd ideas around data science experiments/projects that are not intuitive to people looking from the outside in? JS: Sometimes people don't really understand how machine learning works. Those unfamiliar with it think the computer is "thinking for itself" when all we are really doing is trying to generate a program that can intuit something based on past examples that can be applied to future situations. VK: In the last year, what has improved your work life which could benefit others? JS: Try to minimize meetings unless they are absolutely essential. I find I am more productive this way. VK: What advice would you give to someone starting in this field? What advice should they ignore? JS: Aim for low-hanging fruit/easy wins first. Sometimes data scientists try to take on projects that are too complicated when simpler ones could be finished in far less time and generate results for your organization much sooner.
VK: If you were to write a book, what would be the title of the book? What would be the main topics you would cover in the book?

JS: "Using College to Start Your Career Right." I don't think I handled college quite right the first time. Universities don't focus enough on teaching students career-planning skills, and I feel this is sorely lacking in higher education. I would like to help solve this problem with a book from my own perspective.

VK: In terms of time, money, or energy, what are the best investments you have made which have given you compounded rewards in your career? It can be anything — a book, a project, a conference, a meetup.

JS: Designing my own website and pet projects. I learned so much about data science and software engineering doing this. Following top data scientists and machine learning academics on Twitter. You learn a lot that way about new publications, novel applications, or just funny anecdotes others in the field share. Networking with other data scientists. This provides a great way to share ideas and new discoveries, along with tips from others that can help you in your own projects.

VK: What are some absurd ideas around data science experiments/projects that are not intuitive to people looking in from the outside?

JS: Sometimes people don't really understand how machine learning works. Those unfamiliar with it think the computer is "thinking for itself," when all we are really doing is trying to generate a program that can intuit something based on past examples and apply it to future situations.

VK: In the last year, what has improved your work life that could benefit others?

JS: Try to minimize meetings unless they are absolutely essential. I find I am more productive this way.

VK: What advice would you give to someone starting in this field? What advice should they ignore?

JS: Aim for low-hanging fruit and easy wins first. Sometimes data scientists try to take on projects that are too complicated when simpler ones could be finished in far less time and generate results for your organization much sooner. I would ignore those who say you absolutely need to know big data tools or deep learning right off the bat, because most likely you won't need them at first to solve many of your company's problems.

VK: What are bad recommendations given in data science, in your opinion?

JS: People who claim you can become a data scientist in just six months without a closely related background are probably not correct in most cases. The field is very broad and there is a lot to learn. I also don't like it when people mix up data scientist and data analyst roles. They are not the same thing in my opinion. Both serve different needs and have different skill sets.

VK: How do you determine when to say no to experiments/projects?

JS: I try to weigh the impact a project could deliver to the organization against how much time I think it would take to do it. I prioritize the projects that can deliver the most impact in the least amount of time first. It depends on what data is available, however, as you may need data that isn't yet available to do the project.

VK: Do you ever feel overwhelmed by the amount of data, or the size of an experiment or a data problem? If yes, what do you do to clear your mind?

JS: When starting out with a new database, especially if it is poorly documented (and they usually are, unfortunately), it can be incredibly easy to be overwhelmed. I try to start one table at a time inside the database and figure out what the columns mean and whether they may be of value to me. I also try to figure out how the tables are related to each other and how to join them together. This requires patience, but you eventually get through it.

VK: How do you think about presenting your hypotheses/outcomes once you have reached a solution/finding?

JS: I try to put myself in the other person's shoes and ask what would be the best way of getting them to understand a project outcome. This can be especially difficult if the other person isn't very quantitative by nature, so in that case you really have to narrow down the outcomes of your experiment or project to the absolutely essential parts.

VK: What is the role of intuition in your day-to-day job and in making big decisions at work?

JS: Intuition definitely helps when deciding what features may be good in a model. It also helps in deciding which projects are worth doing. Unfortunately, both of these only get better with experience.

VK: In your opinion, what is the ideal organizational placement for a data team?

JS: It honestly depends on the team and company. I think Type B data scientists (more focused on software engineering) need to be closely paired with engineering. Type A data scientists (more focused on analysis) need to be closely paired with product or the CEO.

VK: If you could redo your career today, what would you do?

JS: I probably would have either taken more software engineering classes during college or done an internship with a larger software engineering component during undergrad than I did. I had to learn a lot of software engineering in a very short period of time. While that was exciting and I eventually caught up, it was also challenging. I think it would have been less so if I had more familiarity with software engineering best practices before I started to transition into data science out of academia. Software engineering in academia is very different from industry.

VK: What are your filters to reduce bias in an experiment?

JS: Make sure the distributions in both your control and treatment groups are as similar as possible, along with being representative. Randomization is a good way to help reduce bias.
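As a small illustration of that last point (the data, column names, and thresholds below are entirely invented; the interview itself contains no code), randomly assigning units to groups and then checking covariate balance might look something like this:

```python
# Sketch: randomize users into control/treatment, then verify the two groups
# look alike on key covariates before trusting the experiment's results.
# All data and column names here are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=7)
users = pd.DataFrame({
    "age": rng.normal(35, 10, size=10_000),
    "sessions_per_week": rng.poisson(4, size=10_000),
})
users["group"] = rng.choice(["control", "treatment"], size=len(users))

# A standardized mean difference near zero suggests the split is balanced.
for col in ["age", "sessions_per_week"]:
    control = users.loc[users["group"] == "control", col]
    treatment = users.loc[users["group"] == "treatment", col]
    smd = (treatment.mean() - control.mean()) / users[col].std()
    print(f"{col}: standardized mean difference = {smd:+.3f}")
```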
VK: When you hire Data Scientists, Data Engineers, or ML Engineers, what are the top three technical/non-technical skills you are looking for?

JS: Assuming I wanted to hire a data scientist (more focused on building products, similar to a machine learning engineer):
- Strong knowledge of machine learning
- Decent software engineering skills
- Good communicator

VK: What online blogs/people do you follow for advice or to learn more about data science?

JS: I like datatau.com a lot for keeping up to date on things. Twitter is also great if you know who to follow. I prefer to follow a mix of leading researchers in academia and top data scientists at companies. This allows me to get practical tips for projects along with new ideas/tools from research groups. If you want an easy way to get started, just see who I follow on Twitter and branch out from there as you find your own interests. My handle is @jmsteinw.
https://medium.com/acing-ai/in-conversation-with-jesse-steinweg-woods-ph-d-senior-data-scientist-at-tronc-f012fbf3172a
['Vimarsh Karbhari']
2020-02-26 05:54:54.291000+00:00
['Artificial Intelligence', 'Technology', 'Machine Learning', 'Data Science', 'Expert']
698
Implementation of the API Gateway Layer for a Machine Learning Platform on AWS
After defining some of the main concepts of the API world in the previous article, I will talk about the different ways of deploying an API Gateway for the Machine Learning platform.

In this article, I will use the infrastructure and software layers designed in one of my previous articles. You may want to go through it to have a clearer view of the platform's architecture before proceeding. As a reminder, the scope of this series of articles is the model serving layer of the ML platform's framework layer. In other words, its "API Gateway".

Scope of this series of articles, by the author

Now let's start designing! A question may arise: if we are in an AWS environment, why not just use the fully managed and serverless AWS API Gateway? You never know if you don't try. So let's try this!

1 | Just the AWS managed API Gateway

Here's how AWS API Gateway could be placed in front of an EKS cluster.

AWS API Gateway for the API Gateway layer, by the author

First of all, AWS API Gateway is a fully managed service and runs in its own VPC, so we don't know what's happening behind the scenes or any details about the infrastructure. Thanks to the AWS documentation, we know that we can use API Gateway private integrations¹ to get the traffic from the API Gateway's VPC to our VPC using an API Gateway resource called a VpcLink². The private VpcLink is a great way to provide access to HTTP(S) resources within our VPC without exposing them directly to the public internet.

But that's not all. The VpcLink is there to direct the traffic to a Network Load Balancer (NLB), so the user is responsible for creating an NLB which serves the traffic to the EKS cluster. With the support of NLBs in Kubernetes 1.9+, we can create a Kubernetes Service of type LoadBalancer with an annotation indicating that it's a Network Load Balancer³.
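A minimal sketch of such a Service manifest could look like the one below (the service name, selector, and ports are placeholders chosen for illustration; the load-balancer-type annotation is the relevant part):

```yaml
# Sketch: a Kubernetes Service of type LoadBalancer that asks the AWS cloud
# provider for an NLB instead of the default Classic Load Balancer.
# Names, labels, and ports are illustrative placeholders.
apiVersion: v1
kind: Service
metadata:
  name: model-serving
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: model-serving
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
```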
That would be a correct setup for the AWS API Gateway on EKS. We could also benefit from WAF support for the managed API Gateway.

The problem with this setup is that we have the power of an API Gateway, but it's far away from our cluster and our services. If we want to use a specific deployment strategy for each service, it would be great if this were done very close to the service (like in the service's definition itself!).

2 | API Gateway closer to our ML models!

Here's another way of doing things. Design components: AWS API Gateway + NLB + Ambassador API Gateway.

AWS API Gateway combined with Ambassador for the API Gateway layer, by the author

In this setup, we put an open-source API Gateway solution closer to our services. I will talk in detail about Ambassador in a future article. For now, let's just say it's a powerful open-source API/Ingress Gateway for our ML platform that brings the API Gateway's features closer to our models. So do we really need the AWS API Gateway? Not really…

One downside, though: we will lose the WAF advantages if we don't use the AWS API Gateway. But maybe we can optimize it more!

3 | Eliminate the AWS API Gateway!

So let's eliminate the AWS API Gateway. Design components: NLB in a public subnet + Ambassador API Gateway.

Public AWS NLB combined with Ambassador for the API Gateway layer, by the author

We just need to put the NLB in a public subnet so that we can receive the public traffic. However, an NLB doesn't understand HTTP/HTTPS traffic: it allows only TCP traffic, offers no HTTPS offloading, and has none of the nice OSI layer 7 features of the Application Load Balancer (ALB). Plus, with an NLB, we still can't have the advantages of WAF.

4 | Our final design!

So, here's the final setup. Design components: ALB in a public subnet + WAF + NLB in a private subnet + Ambassador API Gateway.

Final setup for the API Gateway layer, by the author

As WAF integrates well with the Application Load Balancer (ALB), why not put an ALB in front of the NLB? We can also move that NLB back to its private subnet.

One thing to pay attention to, though: in this setup, the AWS ALB cannot be assigned a static public IP address. So, after some time, the ALB's IP changes and we lose access to the platform. Two possible solutions:

1. Summon the almighty Amazon Route 53: we need to use the DNS name of the ALB instead of its changing IP addresses. To do this:
a. We have to migrate our nameservers to Route 53 if that's not already the case.
b. Pay attention to mail redirection: Route 53 only handles DNS and does not redirect emails. A solution for this could be to use an MX record and a mail server (like Amazon WorkMail).

2. Use AWS Global Accelerator: we never get bored with Amazon. Recently, Amazon launched this new service, which could easily solve such a problem. A Global Accelerator with two fixed IPs and a unique DNS name will receive the traffic and direct it to an endpoint group containing our ALB. Here's a detailed guide on how to use this new feature.

Conclusion

In this article, I tried to study different deployments of an API Gateway for the Machine Learning platform. Starting from simply using an AWS API Gateway, I tried to find an optimal setup with maximum use of AWS advanced features like WAF. In the next article, I will discuss Ambassador in detail and the various concepts behind its existence.

If you have any questions, please reach out to me on LinkedIn.

[1] https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-private-integration.html
[2] https://docs.aws.amazon.com/apigateway/api-reference/resource/vpc-link/
[3] https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support
https://medium.com/swlh/implementation-of-the-api-gateway-layer-for-a-machine-learning-platform-on-aws-589381258391
['Salah Rekik']
2020-11-19 17:33:26.913000+00:00
['Machine Learning', 'Api Gateway', 'AWS', 'Technology', 'Cloud Computing']
699
Ex-SpaceX engineer looks to electrify mobility in emerging markets
Former SpaceX engineer Porter Harris has gone from electrifying rockets to rickshaws in a bid to bring efficient, emissions-free mobility to emerging markets with his new company, Power Global. Officially launching today, Power Global is taking aim at the roughly $16 billion market for three-wheeled transportation in India.

While electrification in developed markets focuses primarily on passenger and commercial vehicles, in emerging markets there's a huge concerted push to clean up motorcycles and three-wheelers. They've become the go-to mobility solution in nations where car ownership is lower. They're also typically powered by either internal combustion engines or lead-acid batteries, and neither power source is all that great from an environmental perspective. Lead-acid batteries need to be swapped out every six to eight months and are incredibly toxic, while fossil fuels are a leading contributor to global climate change.

Enter Power Global. It's offering owners of three-wheelers a subscription service that would upgrade them from either lead-acid batteries or internal combustion engines to its swappable battery service.

Close up of Power Global's battery systems. Image Credit: Power Global

The company's first product, the eZee, is a swappable battery for light vehicles. The company's co-founders, Harris and Pankaj Dubey, a former Yamaha Motors and Polaris Inc. executive, see their mission as providing electric vehicle and clean energy products to global markets that have been left behind in the world's push to sustainable mobility.

Power Global will launch its services outside of New Delhi, with the goal of planting a kiosk roughly every three kilometers, Harris told TechCrunch in a recent interview. The company also has plans to provide drivers with an app that will allow them to see how many kilometers they've traveled, their current battery charge, and where they can find a swapping station.

The mobility solutions are also just a point of entry to a broader array of energy services. There are plans in place to add solar panels to the battery charging stations in an effort to provide rural electrification. The company's lithium-ion batteries are projected to last for around five years, and after they're done, the batteries will be sent to a recycler.

Right now, Power Global is starting with battery manufacturing. It aims to have a plant up and running with the capacity to produce about one gigawatt hour worth of batteries (approximately 10,000 Model S packs), according to TechCrunch. The swappable eZee battery offering will be available early next year, with the retrofit services rolling out a bit later.

"Do we really need another solution for the top 10% of the world? No, we don't," Harris told TechCrunch. "Let's focus on the other 90% of the world and actually make a difference."

The company is taking pre-orders for its swappable batteries now. "We are on a mission to improve access to clean energy solutions in India and other emerging markets by sharing our collective years of expertise in bringing affordable battery technology to market," said Dubey, co-founder and CEO of Power Global's India subsidiary, in a statement.
“While the eZee™ will give light mobility vehicles new life, it also represents a path to help build local economies with direct and indirect job creation, while supporting evolving regional environmental goals...” Following the launch of the eZee™ battery module, Power Global will announce its first line of Retrofit Kits to convert diesel- and petrol-fueled auto rickshaws into zero-emissions electric vehicles. The swappable eZee™ battery module will also power future product lines, including upcoming applications for second-life stationary storage and automotive sectors.
https://medium.com/foot-notes-by-footprint/ex-spacex-engineer-looks-to-electrify-mobility-in-emerging-markets-edce1f01987c
['Foot.Notes Footprint Coalition']
2021-08-31 19:31:46.559000+00:00
['Climate Change', 'Sustainability', 'Spacex', 'Electric Vehicles', 'Technology']