Columns: Unnamed: 0 (int64, values 0 to 3k); title (string, 4 to 200 chars); text (string, 21 to 100k chars); url (string, 45 to 535 chars); authors (string, 2 to 56 chars); timestamp (string, 19 to 32 chars); tags (string, 14 to 131 chars)
800
Poor Rudolph
Cartoonist. Former Dave Letterman joke writer. Yinzer. Dad of two girls (one a T1Der). Beer drinker. Pizza lover. He/him.
https://medium.com/@chumworth/poor-rudolph-456f468f3bb5
['Phil Johnson']
2020-12-24 17:23:16.878000+00:00
['Business', 'Comics', 'Cartoon', 'Humor', 'Technology']
801
Tether Added to Crystal Analytics
Tether Added to Crystal Analytics The Crystal Blockchain platform has just been updated to include Tether (USDT), along with several other new features. 1. Tether Added to Our Platform We have now added Tether (USDT) to Crystal Blockchain analytics! Tether is one of the top 10 cryptocurrencies, with a market cap of $4.5b+. Tether support will be available as part of the Bitcoin (BTC) interface on the platform — it has been extended to show USDT transfers. It is important to note that this addition will mean a few changes in your platform functions, including: Explorer: On the entity/address page, it is now possible to check the amount of assets received (in BTC or USDT), as well as the entity/address balance in fiat (USD or Euro). Both types of transfers (BTC and USDT) are now considered when calculating a risk score, making our overall profiling capabilities much stronger. We have extended the Connections feature for USDT, so users can now choose from five different currencies. Visualization: Users will now also be able to build visualizations with the USDT layer. Our Monitor feature now enables users to add transfers in USDT or BTC, so the risk can be assessed appropriately depending on the currency they choose. In Cases, we have added the ability to set up notifications in USDT, as well as in BTC. Other Updates: 2. New Analytics Tool: Shortest Path In the list of All Connections, you will see a new option — “shortest path.” This option will show you the shortest transaction path to any named entity from the Connections list. Users will also be able to open a visualization of a path to the entity in a single click. 3. Site Status Users will be able to check whether all cryptocurrency support systems are operational (e.g. due to maintenance or power outages) using the link status.crystalblockchain.com. 4. Nomenclature Change: “Fiat” instead of “USD” We have also updated our API endpoints. The term “USD” in the current endpoints has been replaced with the term “fiat”. Data is now displayed based on the fiat currency pre-selected in your profile. (Note: these endpoint term changes only affect those who use Crystal via API.)
https://medium.com/meetbitfury/tether-added-to-crystal-analytics-a813b4534a1a
['Crystal Blockchain']
2020-02-19 13:17:33.288000+00:00
['Blockchain', 'Tether', 'Technology', 'Crystal Blockchain', 'Cryptocurrency']
802
Tesla: Our Next-Gen Battery Tech Will Help Create a $25,000 Electric Car
Tesla plans on incorporating the new battery tech and manufacturing methods in 12 to 18 months with full utilization in three years. By Michael Kan To one day create a $25,000 electric car, Tesla says it’s come up with a way to reduce the costs of building its batteries by more than 50 percent. At the company’s Battery Day event, Tesla CEO Elon Musk went over a whole series of changes the automaker plans on making that promise to streamline vehicle manufacturing and the battery cells themselves. The results can not only help Tesla save on costs, but also increase the range of its electric cars by as much as 54 percent. Among the changes is a new battery, the 4680, which Tesla says can offer five times the energy capacity and six times the power of the company’s existing batteries. Credit: Tesla Tesla plans on incorporating the new tech and manufacturing methods in 12 to 18 months with full utilization in three years, according to Musk. Once in place, the company will be able to pump out electric batteries at a higher volume and at a lower cost, enabling the automaker to bring millions more vehicles to market, including less expensive models. Credit: Tesla “I think probably about three years from now we can confidently make a very compelling $25,000 electric vehicle that’s also fully autonomous,” Musk said. (For perspective, the cheapest Tesla car, the Model 3, currently goes for $37,790.) The newly announced cost improvements come not from a single technological breakthrough, but from numerous changes made across the entire manufacturing chain and how the batteries themselves are constructed. Credit: Tesla For example, one change involves constructing a Tesla car from a single piece of metal so that the electric battery can be more densely packed inside. Another improvement involves the company setting up its own cathode production facility and directly extracting lithium from a field in Nevada. Credit: Tesla “Tesla will absolutely be head and shoulders above anyone else in (electric car) manufacturing. That is our goal,” Musk said. On the flip side, it’ll take some time for Tesla to iron out and build out the new manufacturing technologies. “If we could do this instantly we would,” he added. “But I think it really bodes well for the future. The long-term scaling of Tesla and the sustainable energy products we make will be massively increased.” At the event, Musk also gave an update on Autopilot, the driver-assist feature on Tesla vehicles. The company plans on releasing a beta update to Autopilot in “a month or so” that will allow for a fully self-driving Tesla, which Musk has long promised. He was sparse on the details, but said his company overhauled the software behind the system to better identify 3D objects recorded with the car’s sensors.
https://medium.com/pcmag-access/tesla-our-next-gen-battery-tech-will-help-create-a-25-000-electric-car-3502947fb2bb
[]
2020-09-23 13:44:11.860000+00:00
['Battery', 'Future Technology', 'Tesla', 'Electric Car', 'Technology']
803
Blockchain: Unlocking Potential for Supply Chains
Originally published in SustainAbility’s Issue 22 of Radar. As a system that can track products from source to shelf, blockchain can be used to improve supply chain transparency. Over the past five years, numerous uses have begun to emerge. As a means of tackling conflict diamonds, IBM’s TrustChain collaboration has united actors across the value chain to track and authenticate diamonds, precious metals and jewellery to guarantee their origin. In the automotive sector, Volkswagen and Minespider recently announced a pilot project to achieve end-to-end transparency in the global lead supply chain. Meanwhile, Bext360 has been working to ensure that coffee farmers get a fair price, paid instantly, for their beans. Across the numerous pilot projects of the past decade, blockchain is showing enormous potential for securing high-risk commodities like conflict minerals, wood, cotton and coffee. Yet companies are still failing to roll out scalable blockchain solutions. Michael Casey, co-author of The Truth Machine, notes that “[a]cross the board, actual productive use of blockchain for day-to-day business operations is still extremely thin.” Although it is widely acknowledged that blockchain could be transformative for supply chains, it is clear that there are critical barriers facing its meaningful deployment. Our analysis of over 30 pilot projects revealed three core themes that need to be addressed before we can see blockchain’s potential unlocked. Putting the cart before the horse Provided that they have some sort of digital or physical identifier, most products passing through a supply chain can be registered on a blockchain. However, in order to set up a blockchain, companies need to have clear visibility and control over all tiers of their suppliers. Every farmer, distributor, packager, and other supply chain actor must be known by the company and willing to participate. It is unsurprising that companies like Starbucks or Walmart, which have both the purchasing power and the sophistication of supply chain management to fully engage their supply chains, have been the first to pilot blockchain solutions. For most companies, the reality is that supply chains remain murky. For many companies in the coffee sector, for example, coffee is bought indirectly from intermediaries who create economies of scale but give buyers little visibility and control over where products are sourced. Deploying a technology like blockchain will require companies to begin engaging more deeply and directly with their suppliers. Creating pathways to access Even with a clear view of the supply chain, implementation will still be impeded by financial barriers. Especially for commodity suppliers like those in coffee supply chains, technological solutions are prohibitively costly. Twenty-five million smallholder farmers produce 80% of the world’s coffee, yet on average they make less than $2 per day. Companies will need to play a key role in helping their suppliers access the technology, internet connections and digital literacy required for adoption. Some companies are already beginning to tackle this issue. For example, GrainChain, a blockchain-powered commodities trading platform, has helped farmers access their tools through microloans that can be issued by banks to online digital wallets held by the growers.
Supporting suppliers through this process will require long-term and engaged relationships from both parties. The benefits of this will be felt by both the business and their suppliers, as security of demand will help suppliers achieve the financial stability needed to improve their livelihoods. From competitive to collaborative advantage Inevitably, the burden of equipping entire supply chains with blockchain-ready technologies is prohibitive for any one company. Yet most companies continue to take a siloed approach that aims at building competitive advantage as much as transparency. Given that many businesses share significant numbers of suppliers with their competitors, there is a strong case for collaboration in blockchain deployment. The IBM-Maersk shipping consortium has already established the TradeLens platform to harness these synergies in the logistics sector. However, there is a need for greater cooperation between companies in other sectors to equip their mutual supply chains with the right tools for transparency. The results could be transformative for the uptake of blockchain systems given that, as in all networks, the utility of the system is directly proportional to its scale. These three themes underpin the gap between the buzz that blockchain has created and its effective real-world applications. As we explored in issue 16 of Radar (Blockchain, Foundational not Disruptive), blockchain is a tool rather than a solution. Speaking recently with the World Economic Forum, Catherine Mulligan of GovTech Lab and UCL’s DataNet emphasised the importance of blockchain not being viewed as an end in itself but instead “engaging with other problems around the edges before we turn to digital technologies.” In supply chains where defining provenance is a critical issue, the irrefutable transparency that blockchain offers appears to be a genuinely effective use case. High-risk commodities will be a natural place to focus efforts as blockchain continues to develop. Companies that are faced with these issues now need to take a more considered approach to its implementation to unlock its potential. They will need to work harder to understand their supply chains and collaboratively support them in transitioning to the digital economy. If they can do this, we will begin to unlock the sustainability benefits that blockchain can offer.
https://medium.com/@nictheath/blockchain-unlocking-potential-for-supply-chains-9e07057f3609
['Nicolas Heath']
2020-04-12 13:20:19.699000+00:00
['Sustainability', 'Technology', 'Supply Chain', 'Blockchain', 'Transparency']
804
Playing It Safe. There Is No Other Way to Launch Self-Driving Cars
Playing It Safe. There Is No Other Way to Launch Self-Driving Cars By Scott Griffith, CEO, Ford Autonomous Vehicles LLC and Mobility Businesses A Ford self-driving test vehicle stops to allow pedestrians to pass. Over the last several decades, Ford and the rest of the auto industry have spent a vast amount of resources developing robust, comprehensive processes to ensure we design and deploy safe vehicles, because we care about the safety of our customers. That same mindset dominates how Ford approaches the introduction of self-driving cars. We believe that self-driving vehicles present an opportunity to help improve safety on our streets by reducing some part of the roughly 94 percent of crashes that the National Highway Traffic Safety Administration estimates are due to human recognition or performance errors. The agency found there were more than 2,800 fatalities and an estimated additional 400,000 people injured due to motor vehicle crashes involving distracted drivers. Our hope is that self-driving cars can help improve this situation as well. Of course, there’s a very obvious distinction between vehicles with humans at the helm and those without. Vehicles driven by humans have been fine-tuned and enhanced over time to help improve driver behavior, but the challenge for self-driving cars will be to manage all driving operations on their own — making decisions and performing maneuvers to navigate through numerous scenarios. Just as decades of experience have given us safe and reliable development processes for human-driven cars, we need to draw upon that experience and develop the same processes for self-driving ones. That’s why we’re so excited to have Chris Gerdes assist the team at Ford Autonomous Vehicles LLC. In his new position as safety advisor to Ford, Chris will play a meaningful role in the creation of these safety processes — the very processes that will help us understand when we can be confident that our self-driving vehicles are safe and ready for deployment. Chris Gerdes of Stanford University joins Ford as a self-driving vehicle safety advisor. Chris is uniquely positioned to help guide Ford in its self-driving vehicle development efforts. As the co-director at the Center of Automotive Research at Stanford University, his laboratory studies how cars move, how humans drive and how to design future vehicles that can drive themselves. In addition to his expertise in mechanical engineering and his continued research at Stanford, Chris served as the U.S. Department of Transportation’s first chief innovation officer in 2016, and was part of the team that drafted the country’s first federal automated vehicle guidelines. At his Stanford lab, Chris and his students explore how to develop cars that can avoid collisions, where possible through the laws of physics. This research could result in significantly different control systems in self-driving cars than we’d find in human-driven vehicles. For example, modern stability control systems may restrict how a vehicle manages an evasive lane change maneuver so the average driver avoids spinning out, but at the same time they limit some maneuvers that expert human drivers can perform to avoid collisions. Would altering or removing those control systems give self-driving vehicles more capability to make decisions and improve their maneuverability? 
That’s one area where we’re aiming to improve our understanding — learning lessons from the very best human drivers and turning those lessons into algorithms that can be deployed on self-driving cars. That way we can ensure that if there is an unexpectedly icy road or a pedestrian that suddenly steps out into traffic, the car can very reliably use all the friction on the road to help move out of harm’s way. As Ford’s safety advisor, Chris will work closely with Ford’s Government Affairs, Automotive Safety and legal groups, as well as the Autonomous Vehicle System Engineering (AVSE) team. Together, Argo AI and our AVSE teams collaborate to draw upon the respective strengths and experiences of each team, with the goal of creating a safe and reliable self-driving system. There are challenging situations drivers face every day, and the team needs to work through various scenarios to determine the logic for how we want the self-driving vehicle to handle similar situations. Like those of us at Ford, Chris believes strongly in the impact self-driving vehicles can have on safety and transportation accessibility. Ensuring self-driving cars with no brake pedals or steering wheels are safe, trusted and reliable will be the defining challenge of the auto industry. The technology is still in its formative phase, but we have the opportunity to develop safe processes that can help us harness its power to improve our customers’ lives. Having the benefit of Chris’ background, expertise and guidance, Ford will be even more prepared to tackle that challenge.
https://medium.com/self-driven/playing-it-safe-there-is-no-other-way-to-launch-self-driving-cars-254a6ff5ee6c
['Ford Motor Company']
2020-11-10 17:02:29.973000+00:00
['Safety', 'Automation', 'Autonomous Cars', 'Technology', 'Self Driving Cars']
805
SBI Specialist Officer Online Form 2020
Hi, this is Sarkari Select, a job website with immediate notifications available on this site. All current job notification details are available here. You're welcome.
https://medium.com/@sarkariselectjob/sbi-specialist-officer-online-form-2020-9704f383d0dc
['Sarkari Select']
2020-12-26 14:32:15.662000+00:00
['Sbi So Exam', 'Banks', 'Banking Technology', 'Banking', 'Ibps']
806
I made my PC work like Minority Report
A while ago, I was watching YouTube while eating (which I always do) and tried to skip past a video to another one, but couldn’t because of my food-covered hands. I tried in vain to swipe in mid-air hoping for something to happen (nothing did), when I realized that with my camera and some AI, I could control my mouse just by waving my hand. Thus, I created an app that makes it possible to control your computer with hand gestures. Now, instead of continuously holding down the volume key or repeatedly flicking the trackpad, I control the scroll or volume by moving my hand up or down, for example. I can highlight and copy parts of articles and paste them with a simple hand motion. I can even draw without a stylus or my mouse! Here’s the technical side of what I did: I implemented a point teleportation system for mouse motion, where the mouse would teleport to wherever my hand was in relation to the computer screen. The amount of time with my hand in mid-air drastically decreased (after some training to get myself calibrated — it took about 30 minutes to master where my hand was supposed to be to teleport to a location). Many of my users started using it for the same purpose, and the ability to scroll and control the volume was a big hit. I think it could be used as a platform for further development for PC/Mac/Linux users, like developing command line shortcuts that a keyboard can’t do, etc. It could also be used as a platform for VR gaming, and get rid of those controllers too. It could also work on tablets at airports and kiosks to avoid contact, especially in the middle of a pandemic. I have made it available for Windows and macOS, but I’ll be releasing versions for Linux in the next few months! If you have any questions, or would like to request new features and actions, you can reach out to me directly from the contact section of the site. Visit the Windows Store to try it out yourself, or get the Mac version directly from the website!
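A rough sketch of the kind of "teleportation" mapping described above, not the author's actual code; the hand-tracking input, the mirroring behavior, and the screen resolution are all assumptions:

```typescript
// Map a hand position, given as normalized camera coordinates in [0, 1],
// to an absolute screen position. All names and values are hypothetical.
interface Point { x: number; y: number }

const SCREEN = { width: 1920, height: 1080 }; // assumed display resolution

function handToCursor(hand: Point): Point {
  // Clamp to the unit square so the cursor never leaves the screen,
  // then scale to pixels. The x axis is flipped because the camera
  // sees the hand as if in a mirror.
  const clamp = (v: number) => Math.min(1, Math.max(0, v));
  return {
    x: Math.round((1 - clamp(hand.x)) * SCREEN.width),
    y: Math.round(clamp(hand.y) * SCREEN.height),
  };
}

console.log(handToCursor({ x: 0.25, y: 0.5 })); // { x: 1440, y: 540 }
```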
https://medium.com/@orders-81716/i-made-my-pc-work-like-minority-report-f2969e61674c
[]
2020-12-24 20:38:27.221000+00:00
['Apps', 'Technology', 'AR', 'Apple', 'AI']
807
Time to abandon SwiftyJSON and use JSONDecoder
1. JSON Parsing Is Part of the Foundation Photo by Mirko Blicke on Unsplash. There are a handful of well-known projects dealing with JSON parsing, employing various approaches and philosophies. SwiftyJSON is probably the earliest and most popular one among them. It’s less verbose and less error-prone, and it leverages Swift’s powerful type system to handle all of the details. JSONSerialization is the core of most JSON parsing projects. It comes from Swift’s Foundation framework and converts JSON into different Swift data types. People used raw JSONSerialization to parse JSON objects before projects like SwiftyJSON came along. But it can be painful, as the value type and JSON structure may vary. You need to manually handle errors and cast the Any type to Swift Foundation types. It’s more error-prone.
https://medium.com/better-programming/time-to-abandon-swiftyjson-switch-jsondecoder-codable-407f9988daec
['Eric Yang']
2020-07-03 00:45:48.726000+00:00
['Technology', 'iOS', 'Xcode', 'Swift', 'Programming']
808
Smart cities: can they be hacked?
The ultimate takeover of a country could be performed by getting inside the technologies of a nation. It’s something that you are more likely to see on the silver screen than on the news. But could smart cities being hacked move from fiction to reality? Why do we need smart cities? Some would argue that they’re not needed — that too much technology loses the impact of person-to-person contact. Others believe that technology can be used to help people live healthier, happier lives. Of course, it depends on the style of living that you’d like to get accustomed to. Some people prefer rural living, and that’s unlikely to be affected by smart city changes any time soon because the focus of these technologies is in large-population sites. What are smart city solutions? Before you start to think about smart city solutions, you first need to understand the problems. What are you trying to solve? How are you trying to make people’s lives better? What can we be doing to make things better/easier for people? Our client Exeter City Futures, for example, has focused its attention on sustainability and inclusion, choosing to support and invest its resources in technologies developed locally that solve carbon and global warming challenges. Different cities will have different priorities once they identify the challenges they are trying to solve. Will you focus your resources on the elderly or the very young? Are certain areas with more deprivation going to need more basic investment before we start thinking about smart city technologies? What are smart city technologies? Once you have a clear idea of the problems you want to solve, you can start looking at smart city technologies, which typically fall into two categories: city-wide, or at home. On the city-wide scale, there is almost too much choice. From pavements that generate electricity when people walk on them, to lamp posts with sensors that monitor pollution, from smart bins that measure their filling rate, to smarter networks. Some cities are already implementing these city-wide technologies. In Bristol, UK, a partnership between two of our clients, Bristol is Open and Zeetta Networks, as part of the DCMS 5G-SmartTourism trial, installed sensors around the harbourside on the 5G network to ensure that anyone who fell into the water could be rescued, preventing deaths. At the other end of the scale are the home smart city technologies, and you may already have some of these installed in your own home. These include smart doorbells, smart fridges, technology that allows you to feed your pets remotely, and more. Can smart cities be hacked? City-wide technologies are usually not hackable because there is strong security put in place from the outset, which is updated frequently, reducing the risk. But home-based smart city technologies are more easily hackable, especially if people don’t update the security of those devices when they buy them. Most of the time, these devices arrive with passwords as simple as ‘password’ or the device name, which can be dangerous for those using smart devices at home and at work. The danger only increases when you realise that people who use those devices then move into a smart city where they are connecting with many other devices. It only takes one weak link to cause a problem, something that our client BlackDice points out regularly.
https://medium.com/@oggadoon/smart-cities-can-they-be-hacked-e284edc4b471
['Oggadoon Digital Marketing']
2020-06-08 14:16:47.681000+00:00
['Smart Cities', 'Technology', 'Hacking', 'Future', 'Development']
809
My first time on Medium
“I don’t think this will work out”, I said to myself when I first thought about writing my own blog. But that’s not going to stop me from doing it. Hi. My name is Sundaresan. I’m a college student pursuing my Bachelors in Electronics and Communication Engineering. I love technology in general and my particular field of interest is Cyber Security. I’m starting this blog page with the hopes of expressing my knowledge and feelings towards both these topics, to others. Don’t worry, I won’t bore you with complicated cryptography or some gadget wizardry. In fact, I have just started learning them too. I’ll give simple explanations and reviews that even someone relatively new to the field can understand. I’ll document and write a blog on each and everything I learn, from beginner to expert (hopefully). I invite you on a journey with me. I don’t know where I’m going. I can’t tell you if that place is good or bad. But I can promise you this, that the journey we’ll be taking together, it will be beautiful. So yeah! That’s it about me. Tell me about yourself in the comments and give the posts a clap if you like them. Have a nice day!
https://medium.com/@iamsila/my-first-time-on-medium-6cd2be35d3b2
['Sundaresan Chettiyar']
2021-01-18 18:24:01.201000+00:00
['Technology', 'About Me', 'First Post', 'Introduction', 'Cybersecurity']
810
The first App, “Pilly Patch”
In this text we will talk about the economy, productivity, pollution and how to fight it with an effective system that also uses blockchain technology. We will talk about the first piece inserted into the Krium infrastructure, capable of hosting services that can be used by the masses and able to satisfy real needs. All of this concerns “Pilly Patch”, the first mobile app developed by the Krium team. Modern life seems increasingly characterized by exchanges and social interactions aimed at buying goods over the internet. Contemporary society is therefore increasingly inclined to use new technologies to gain real economic benefits, combined with a greater choice of useful consumer goods. Pilly Patch has been developed on this thesis: the design of its operating structure is backed by real usage analysis, its functions are the result of research into contemporary problems, and it aims to be a valid option for those who want to approach and/or increase sales in a scalable way. Pilly Patch uses the new levels of logical and technical development listed below, which will be analyzed in detail later. The Pilly Patch project is a mobile app made specifically for advertising business 4.0. The system allows anyone who creates sponsorship promotion campaigns to gain maximum exposure. We will now analyse, through an example, a real usage scenario where the Pilly Patch app can be used for effective target marketing. “Scenario for a Start-Up”: A young start-up team has designed and created some innovative skateboards, targeting a market segment that is, in practice, the skateboarding audience. The start-up team planning a marketing campaign could use two different strategies. The first is to have a graphic designer create posters specially designed to be distributed. The brand-new start-up team would then hire staff whose main task is to physically deliver those posters at the targeted locations, where the audience interested in the product is actually located. We would like to focus on three main disadvantages of this approach: The first disadvantage concerns the economic burden; the team has to pay two human resources, one who designs the poster and one who actually delivers it. The second disadvantage concerns time; for a paper leaflet to be effective, you have to run the marketing campaign for an extended period. The third disadvantage concerns the environmental impact of wasted paper. The second strategy concerns the use of modern technologies, whose problems have already been analysed in the white paper on the website. If the young start-up team used our system, it could save time and money (contributing to environmental benefit at the same time), because it would be able to deliver virtual posters at the targeted audience's locations (skateboard parks in any city in the world). The team consequently saves the money that would have been used to recruit staff, which can be invested elsewhere. Through this system you can deliver ads in any part of the world. Let's analyse step by step how the system works: During registration, the start-up team chooses the sector it will operate in (clothing, sports, electronics…). This choice filters searches and allows the system to suggest the most effective areas where users should send their campaigns. Subsequently, the team sets up location and targeting for a specific area of the city through the maps section (geo-localized sponsorship system).
In this specific case, the skateboard parks in the area will be selected. The system then provides the team with a real-time list of all the people who are actually using the app in the area (real-time ADV system). We now proceed to the creation of the “digital poster”. Once the poster is created, you decide whether discounts would work well for a certain product on sale, or whether the ad will simply disclose the prices and features of the goods to the user. When you send the announcement to the system, the Pilly Patch app retains a small commission for the service; the rest of the amount, which is reserved for the advertising campaign, is divided in equal amounts among those who click, open and view the ad for X seconds (users’ cash income rewards system; see the sketch below). This mechanism offers an incentive to open and actually view the ad. The team can choose between two types of sponsorship: the flash ADV system and the programmed ADV system. The flash ADV method delivers a flash sponsorship exclusively to the audience located at that moment in the area selected by whoever runs the ad campaign. The timer cannot be programmed, so in this case the sponsorship is single-use. The programmed ADV method allows the total time of ad release to be selected using a timer. Everyone who keeps their device's geo signal turned on and physically enters the predetermined area will receive the ad. In this latter case, the marketing campaign keeps running until the team runs out of funds; if the estimated completion time is reached first, the promo campaign ends and the amount of money that hasn't been spent returns to the marketer's account. Value proposition: increase the sales of those who run a marketing campaign; offer an economical and effective service; provide environmental benefits by reducing paper leafleting; allow anyone to earn by opening an ad in the app; allow anyone to use those earnings to run sponsorship campaigns; reduce the time wasted on ad campaigns; let anyone sponsor any good, service or information. “Pilly Patch, the blockchain social network for advertising!” You can find all the details in the white paper on the site… follow us to stay informed! Twitter: https://twitter.com/Kriumproject E-mail: [email protected] Discord: https://discord.gg/52WymhV Github: https://github.com/kriumproject Official website: https://krium.net/index.html
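A minimal sketch of the budget-splitting mechanism described in this record's reward system; the 5% commission rate and every name here are illustrative assumptions, not taken from the Krium white paper:

```typescript
// Split an ad budget between the platform commission and the viewers who
// opened and watched the ad for the required number of seconds.
function splitAdBudget(
  budget: number,
  qualifiedViewers: string[],
  commissionRate = 0.05 // assumed rate; the real fee is not stated above
) {
  const commission = budget * commissionRate;
  const pool = budget - commission;
  const perViewer = qualifiedViewers.length > 0 ? pool / qualifiedViewers.length : 0;
  return {
    commission,
    payouts: qualifiedViewers.map((viewer) => ({ viewer, amount: perViewer })),
  };
}

console.log(splitAdBudget(100, ["alice", "bob", "carol"]));
// commission: 5, and each of the three viewers receives ~31.67
```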
https://medium.com/@kriumproject/the-first-app-pilly-patch-986792254c33
['Krium - Project']
2019-04-12 09:43:43.583000+00:00
['Startup', 'Marketing', 'Business', 'Technology', 'Innovation']
811
What is Object-Oriented Programming?
Object-oriented programming is the transfer of the objects around us into the computer environment. For example, being able to monitor our household items with a computer and operate them remotely is an example of object-oriented thinking. What is an object? Objects are components that contain methods which store, manage and process data. They can be reused unchanged and only take up memory space. Properties of Object-Oriented Programming Object-oriented programming includes four distinct features: Abstraction Encapsulation Inheritance Polymorphism Abstraction: Since each object belongs to a class, defining the behaviors and properties in a class is abstraction. For example, there are certain classes of white goods, and each class has its own colors, features and models. Encapsulation: Abstracted behaviors and properties are encapsulated by an object-oriented program. With encapsulation, it is decided which features or behaviors will be exposed and which will not. For example, personal data is encapsulated, and only the parts that need to be used are left open. Storing and protecting this information is called encapsulation. To read more, you can visit https://letsbecool.com/what-is-object-oriented-programming/.
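A minimal sketch of encapsulation in practice, using a hypothetical TypeScript class (the names are illustrative, not from the article):

```typescript
// Encapsulation: the balance is hidden behind private state; callers can
// only interact with it through the methods the class chooses to expose.
class BankAccount {
  private balance = 0; // not directly accessible from outside the class

  deposit(amount: number): void {
    if (amount <= 0) throw new Error("Deposit must be positive");
    this.balance += amount;
  }

  getBalance(): number {
    return this.balance;
  }
}

const account = new BankAccount();
account.deposit(50);
console.log(account.getBalance()); // 50
// account.balance = 1_000_000; // compile-time error: 'balance' is private
```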
https://medium.com/cool-digital-solutions/what-is-object-oriented-programming-3b5b185333f5
['Cool Digital Solutions']
2020-12-18 11:10:19.195000+00:00
['Information Technology', 'Software Development', 'Software', 'Oop', 'Programming']
812
My Dad Deleted My Novel
My Dad Deleted My Novel A nostalgic and cautionary tale about backing up your work. Photo by Fredy Jacob on Unsplash I started writing a novel when I was thirteen. The year was 2001, and I diligently typed away on the beige family desktop each evening, headphones feeding me a constant stream of carefully curated midi files and Clippy insisting I was probably writing a letter and needed assistance. Floppies littered the desk and magnets were strictly banned from coming anywhere near this slovenly behemoth and its tangled nest of wires. I blocked any and all distractions with my cleverly worded AIM away message, one of several for every occasion. (Also, if it sounded like that last paragraph was in another language, congrats on your functional joints. I’m jealous.) An inspiring start It was an epic fantasy about four magical girls that could transform into dragons. There was an anthropomorphic black cat that helped them, and an evil bad guy to be overthrown. There was lore, the genocide of the cat’s species, and bloodlines. Artifacts abound, and each girl had their own unique dragon form (including eastern-style dragons). It was my opus, my pride and joy, nestled in the confines of the C: drive. My dad, meanwhile, was doggedly trying to switch from a lifetime of construction work to the blossoming world of IT. He always enthusiastically supported my creative endeavors, from music to art and everything and anything in between, but he also was keen for me to learn C++ alongside him (I did not). He was also a keen early adopter of whatever new technology we could afford (very little). So that meant he was confined to experimenting with the family desktop instead, teaching himself to program on it and learning how the hardware worked. I’m sure you can see where this is going. A dream disappeared Just like most evenings, I logged onto the desktop, ready to get through another chapter. But something was wrong. The midi files I had downloaded were gone. Still, that was just music. But the Word document was gone too. Cue my heart starting to pound. Months of work, MIA. I searched everywhere, checked the Recycle Bin, then double checked while forcing myself to take deep breaths. When nothing turned up, I went to my dad, blinking back tears, and asked where he’d put my files. He seemed confused, and I explained that my novel was nowhere to be found. Confusion coupled with shock on his face; he didn’t know I was writing a novel. He then went on to explain, with guilt knitting his brow, that he reformatted the hard drive. Everything was gone. We were both distraught, as he had not backed up anything of mine onto a floppy disk. I didn’t have notes on the story, I hadn’t sent it to anyone, had not printed it, or otherwise created any copies. It was around sixty percent finished, and now there was nothing left of it. My dad, to his credit, apologized profusely. As I mentioned before, he had always supported my creative pursuits more than anyone, and deleting my novel had hit him hard. Hard enough, in fact, that shortly after, he got a new computer, and gave me the old family desktop. From that point on, I always had my own computer, and thus total control over my own data. Alas, I never wrote another word of that novel. The lessons learned While those magical girls and their trusty cat remain relegated to the fringes of my memory, I did learn a few lessons from this little disaster:
https://ashandfeather.medium.com/my-dad-deleted-my-novel-41cc7d4de498
['Ash']
2020-06-22 20:46:19.936000+00:00
['Technology', 'Creative Writing', 'Family', 'Writing Advice', 'Fathers']
813
Top JavaScript Frameworks and Tech Trends for 2021
Happy New Year! It’s time to review the big trends in JavaScript and technology in 2020 and consider our momentum going into 2021. Our aim is to highlight the learning topics and technologies with the highest potential job ROI. This is not about which ones are best, but which ones have the most potential to land you (or keep you in) a great job in 2021. We’ll also look at some larger tech trends towards the end. Language Rankings JavaScript still reigns supreme on GitHub and Stack Overflow. Tip #1: Learn JavaScript, and in particular, learn functional programming in JavaScript. Most of JavaScript’s top frameworks, including React, Redux, Lodash, and Ramda, are grounded in functional programming concepts. TypeScript jumped past PHP, and C# into 4th place, behind only Java, Python, and JavaScript. Python climbed past Java for 2nd place, perhaps on the strength of the rapidly climbing interest in AI and the PyTorch library for GPU-accelerated dynamic, deep neural networks, which makes experimentation with network structures easier and faster. Source: GitHub State of the Octoverse, 2020 JavaScript is also #1 on Stack Overflow for the 8th year in a row. Python, Java, C#, PHP, and TypeScript beat out languages like C++, C, Go, Kotlin, and Ruby. Frameworks When it comes to front-end frameworks, a large majority of JavaScript developers use React, Vue.js, or Angular. jQuery still makes a surprisingly large showing, almost double the Vue.js showings, but it’s my guess that jQuery is used less in application work, and more in content sites and WordPress templates, so we’re going to exclude it this year. Search Volume React dominates search volume at 57.5%, with Angular collecting a large 31.5% share, and Vue.js picking up a respectable 11% slice. *Methodology: All search trends were selected by topic rather than by keyword to exclude false positives. Jobs If you want to learn the framework that will give you the best odds of landing a job in 2021, your best bet is still React, and has been since 2017. React is mentioned in 47.6% of the listings which mention a common front-end framework, Angular picks up 41.2%, and Vue.js trails at 11.2%. It’s important to mention that most job listings say that they require experience with one of a few named frameworks, but a large share of those listings are actually hiring for React work when you look at their listed tech stack, and will show preference to candidates with a strong knowledge of React. You’ll see some supporting evidence of that in the download trends, below. *Methodology: Job searches were conducted on Indeed.com. To weed out false positives, I paired searches with the keyword “software” to strengthen the chance of relevance. I also omitted the “.js” from “Vue.js” because many listings don’t include the “.js”. All SERPS were sorted by date and spot checked for relevance. Downloads The npm download counts look fairly similar to the search trends, but reveal something interesting: The number of downloads for Angular 2+ and Vue.js are pretty much neck-and-neck, but if you add in the number of people using the old Angular framework, Angular has a solid lead over Vue.js in downloads. If we look at recent download shares on a pie chart, it shows React at ~66%, Angular (all versions) at ~20%, and Vue at ~15%. TypeScript vs JavaScript 10.6% of employers specifically mention TypeScript in job listings, up from 7.4% last year. Developer interest in TypeScript is undeniably strong, and growing rapidly. 
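To make the TypeScript discussion concrete, here is a minimal sketch contrasting an inline type annotation with a reusable interface; the types and names are hypothetical, not taken from the survey data:

```typescript
// Inline annotation: the object's shape is repeated at every use site.
function formatUserInline(user: { id: number; name: string }): string {
  return `${user.id}: ${user.name}`;
}

// Interface: the shape is declared once and reused across the codebase.
interface User {
  id: number;
  name: string;
}

function formatUser(user: User): string {
  return `${user.id}: ${user.name}`;
}

const alice: User = { id: 1, name: "Alice" };
console.log(formatUser(alice)); // "1: Alice"
console.log(formatUserInline(alice)); // "1: Alice"
```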
I predict that this trend will continue in 2021, and users will learn to work around some of the costs of using TypeScript (for example, by favoring interfaces over inline type annotations). The number of jobs that specifically mention TypeScript is still relatively small, but some experience with TypeScript will slightly increase your odds of landing a job in 2021. By 2022, some experience with TypeScript might give you an edge in the job market. However, because it’s easier for a JavaScript developer to learn TypeScript than a completely new language, TypeScript teams are usually willing to hire and train good JavaScript developers. Server Frameworks On the server side, Express still dominates in download counts, so much so that it’s difficult to see how popular contenders are doing relative to each other. As I predicted last year, excluding express, we see that Next.js has emerged as the top contender, which is unsurprising because Next.js is a flexible, full-stack, React-based framework which can help you deliver statically optimized content, but can also fall-back on serverless functions for API routes and SSR when you need to generate content dynamically. You can even statically generate content on-demand the first time it’s requested, and subsequently serve cached static content served from CDN — useful for apps based on user-generated content. Next has many other advantages, including automatic optimization of page bundles, automatic image optimization with the new Image tag and built-in performance analytics to help you improve your user’s page load experience. If you use GitHub and deploy on Vercel, you’ll also get automatic deploys for every PR, and a buttery smooth CI/CD pipeline. Essentially, it’s like having the best full-time DevOps team on staff, but instead of paying them salaries, you save a significant amount of money in hosting bills. Expect Next.js to continue to explode in 2021. Remote Work Trends In 2020, teams were forced to learn to collaborate remotely by a global pandemic. In 2021, remote work will continue to be an important topic. First, because it will probably be June before vaccination against COVID-19 is widespread, and second, because a lot of teams experienced increased productivity and reduced costs during lockdown, many employees will not return to offices in 2021. Remote work has also led to more location freedom, prompting developers to move to places where they have access to things that are important to them, such as family and more affordable housing. Additionally, 72% of employers surveyed by KPMG said that remote work has widened their potential talent pool. Remote-first and hybrid-remote teams will be the new normal in the new decade. Average JavaScript Developer salaries dipped slightly in 2020, from $114k/year to $113k/year, according to Indeed, perhaps due in part to remote work expanding the employee pool beyond tech centers like San Francisco and New York, which tend to have a much higher cost of living, and demand higher salaries to compensate. The average JavaScript Developer salary in San Francisco is $130k. Still, lots of companies with roots in San Francisco and other tech centers are paying remote workers somewhere between the US national average and San Francisco pay, which provides a premium on market rates to attract better talent, and still saves money over hiring locally and paying for office space. Because of this trend, lots of remote jobs exist in the $115k — $130k range for mid-level developers. 
Senior developers often find jobs in the $120k — $150k range, regardless of location. GitHub data suggests that rather than slowing down, teams were more productive working remotely in 2020. GitHub activity spiked when lockdowns began. Source: GitHub State of the Octoverse, 2020 Volume of work on GitHub increased substantially, and average pull request merge times dropped by 7.5 hours. Toss that onto the growing pile of evidence that remote work works. Passwords are Obsolete Passwords are obsolete, insecure technology and absolutely should not be used to protect your users or your app in 2021. The crux of the matter is that about half of all users reuse passwords on multiple applications and websites, and attackers are financially incentivized to bring massive computing power to the problem of cracking your users’ passwords so they can try them on bank accounts, Amazon, etc. If you’re not Google, Microsoft, or Amazon, chances are you can’t afford the computing power required to defend against modern password crackers. Don’t believe me? Check out HaveIBeenPwned. Spoiler: If you’ve used the internet, your passwords have been stolen. I’ve been warning about the dangers of passwords for years, but in 2020, new options emerged which allow us to leave passwords behind, permanently. It was true in 2020, and it remains true: No new app should use passwords in 2021. But once you leave passwords behind in exchange for cryptographic key pairs, your app also gains Web3 superpowers. Which leads me to the next topic: Crypto. Crypto Crypto will continue to be one of the most important and globally transformational technologies in 2021. Here are some highlights from 2020: Bitcoin exploded to new all time highs, thanks in part to notable support from companies like PayPal. Expect more of the same in 2021. Ethereum 2.0 beacon chain launched, which lays the groundwork for Ethereum to become a much more scalable platform. Additionally, scalability solutions such as side-chains and zkRollups gained momentum in 2020. Expect to see more DApps (Decentralized Apps) integrate those scaling solutions in 2021. DeFi (Decentralized Finance) is now a $15 billion market (up from $650 million when I wrote last year’s edition of this post), mostly operating on the Ethereum blockchain. Many multi-million-dollar exploits plagued the DeFi ecosystem in 2020. Smart contract security will continue to be a hot topic and huge opportunity in 2021. Non-Fungible Tokens (NFTs) gained momentum in 2020, with several high profile sales of single tokens priced in the tens of thousands of dollars each. Rarible introduced their own community token and began to airdrop it to marketplace users, fueling increased volume. Millions of dollars’ worth of NFTs are bought and sold daily, but this is just the beginning. Because they can represent virtually anything of value, the total addressable market is in the $trillions. The Flow blockchain launched and brought with it lots of promise for mainstream blockchain adoption. NBA Top Shot has sold over $6 million in NBA-branded NFT moments, which represent short video clips of key moments in NBA games. Theta Network launched smart contracts and NFTs. Among other things, NFTs will be used for stickers and badges on Theta.tv, a decentralized alternative to Twitch with millions of monthly active users.
Artificial Intelligence (AI) 2020 was a seminal year for AI. Via the GPT-3 launch, we learned that language models and transformers in general may be a viable path towards Artificial General Intelligence (AGI). The human mind’s ability to generally solve a wide variety of problems by relating them to things we already know is known in AI circles as zero-shot and few-shot learning. We don’t need a lot of instruction or examples to take on tasks that are new to us. We can often figure out new kinds of problems with just a few (or no) examples (shots). That general applicability of human cognitive skills is known as general intelligence. In AI, Artificial General Intelligence (AGI) is “the hypothetical intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can.” GPT-3 demonstrated that it could teach itself math, how to code, how to translate text, and a virtually infinite variety of other skills via its gigantic training set, which includes basically the whole public web (Common Crawl, WebText2, Books1, Books2, and Wikipedia), combined with its enormous model size. GPT-3 uses 175 billion parameters. For context, that’s an order of magnitude (10x) the previous state of the art, but still orders of magnitude smaller than the human brain. Scaling up GPT-3 is likely to lead to even more breakthroughs in what it is capable of. Self Driving Cars In October 2020, Waymo began offering fully driverless rides (with no human in the driver seat) on 100% of their rides. At the time of launch, there were 1500 monthly active users and hundreds of cars serving the Phoenix metro area. In December 2020, General Motors’ Cruise launched fully driverless rides on the streets of San Francisco. Drone Delivery UPS launched two drone trials in 2020: one to deliver prescriptions to a retirement community in Florida, and another to deliver medical supplies including Personal Protective Equipment (PPE) between health care facilities in North Carolina. Regulations, safety, noise, and technical challenges will likely continue to mean slow growth for drone delivery services in 2021, but with continued COVID restrictions that will likely persist off and on through at least June, there has never been a better time to make quick progress on more efficient and contactless delivery.
Quantum Computing Researchers in China have reported that they have achieved quantum supremacy that is 10 billion times faster than the quantum supremacy reported by Google last year. Researchers are making rapid progress, but quantum computing still requires extremely expensive hardware, and there are only a small handful of quantum computers in the world that have achieved any kind of quantum superiority. Quantum-resistant cryptography, quantum-assisted cryptography, and quantum computing for machine learning are potential areas of focus where breakthroughs would have a significant industry-spanning, global impact. I believe that one day, the application of quantum computing in the field of AI will propel the technology forward many orders of magnitude — a feat that will have a profound impact on the human race. In my opinion, that is unlikely to happen in the 2020s, but I expect to hear more quantum supremacy announcements in 2021, and perhaps breakthroughs in the variety of algorithms state of the art quantum computers can compute. We may also see more practical quantum-computing APIs services and use-cases. Next Steps Composing Software will teach you the foundations of functional programming in JavaScript. You can get the Composing Software e-book, print edition, or the blog post series that started it all. Learn React, Redux, Next.js, TDD and more on EricElliottJS.com. Access a treasure trove of video lessons and interactive code exercises for members. 1:1 Mentorship is hands down, the best way to learn software development. DevAnywhere.io provides lessons on functional programming, React, Redux, and more, guided by an experienced mentor using detailed curriculum designed by Eric Elliott.
https://medium.com/javascript-scene/top-javascript-frameworks-and-tech-trends-for-2021-d8cb0f7bda69
['Eric Elliott']
2020-12-31 04:48:23.756000+00:00
['AI', 'JavaScript', 'Software Development', 'Technology', 'Crypto']
814
How to get organic spotify promotion for your new music release
How to get organic spotify promotion for your new music release Photo by Wesley Tingey on Unsplash After reading many messages from indie artists, it’s been brought to my attention that finding Spotify promotion that is organic and not driven by fake bots is near impossible today. I have to admit that I’ve searched on Google and all the usual places and done the research just out of curiosity, and have found nothing. But some good news is that I work with a press team and radio station that have had great success with this. I’ve worked with the AVA Live Radio team on Spotify traffic strategy development for over 10 years, driving Spotify results. What makes a difference in results? On average, it does depend on the song: some reach 30k in a week, some reach 100k in 4 weeks and just keep growing, others take a lot longer, and some just never take flight. Even if your fans like a song, that’s not the deciding factor, but it does help to see positive feedback like replays in the first week, requests for more music, and conversations around a single in that first month of release. These are things I watch out for when I’m working on a promotional strategy for any new single. * NOTE: I discuss strategy live on YouTube weekly. Subscribe and turn on the bell for notifications to join me for those discussions. Timing comes into play along with the quality of music, genre and theme. With so many factors involved, there are no guarantees, other than that we are committed to not giving up and we share all the details with you to constantly adjust the strategy in hopes of reaching that first 100k goal. Do trends influence results? Creating music that rides on music trends is a good strategy but isn’t a necessity. It’s just one strategy that works for DJ creators, electronic music producers and pop artists competing against artists in the top 5. See those artists listed here: Jax Daily December 2020 Music Issue | Best Music and Podcasts of the year How can I get more Spotify plays? If you prioritize Spotify results, you can work in depth with a press team on getting your Spotify page traffic; I suggest doing a package that is focused on that goal only. Defining one clear call to action with your team is essential in achieving the best results any one single can achieve. When I work on the promotion of a Spotify single, all traffic is designed to get you onto a Spotify-driven top 20 chart to measure how your music is doing in contrast to others in your area. We have charts for Hiphop, Rock, Songwriters and electronic music. The second initiative is building Music Brand Awareness. That means we roll out content based on letting people know about an artist with a new single while focusing on message and the most important factors that validate that artist within a genre or niche group. The final step is to make the artist memorable. We create an evergreen content strategy targeted at making the artist memorable. This is customized for each individual, of course, as no two artists are alike. So if you think your current music promotion team is hitting on these marks effectively, then awesome. I want to know who they are so we can join forces, but so far all I see are hundreds of websites pushing fake plays. Here’s how to come work with the AVA Live Radio Press Team: Good luck with your music promotions and never give up!! Jacqueline Jax Music Publicist, Radio host and journalist. MORE music promotion articles:
https://medium.com/@jacquelinejax/how-to-get-organic-spotify-promotion-for-your-new-music-release-6bde2ecc590e
['Jacqueline Jax']
2020-12-10 19:22:59.546000+00:00
['Spotify', 'Marketing', 'Social Media', 'Technology', 'Music']
815
Firebase Authentication Setup Guide
Photo by Micah Williams on Unsplash This week will be a quick tutorial on setting up Firebase authentication for your web application in place of authenticating on your own. Our first step is setting up a google account to use firebase if you already have a Gmail then you are already done as you can use that to access and work with Firebase. This part is pretty simple we are going to be creating a project in Firebase to house our backend and setup authentication via email and password. Create a Project 2. Again straightforward we are naming the project it will be referenced by in Firebase. Name your project 3. Here we are selecting if we would like to opt in to use Google analytics. The creation of the project should take a couple of minutes as it will be provisioning the required space as well as a structure for the database. 4. From here we want to click on either the Authentication option on the left side panel or the main screen button. Click Authentication to begin the process 5. Here we will click the edit button on the right of Email/Password to activate authentication with those credentials. Click the pencil icon to the right of Email/Password Hit Enable 6. Here we are going to switch over to our Users tab to create a test user. The User tab is to the left of the tab we were previously in (Sign-In Method) Test user created! 7. Here the user is created and we will need to make note of the User UID as that will be the identifier in most cases in any web application to get content or data specific to the user. Make note of the User UID 8. Now we have to move over to configuration to connect firebase to our web application. The cog icon next to Project Overview in the sidebar will take us to where we need to go. Project Settings! 9. Once you have navigated to the settings page at the bottom right we will be clicking the code icon (web-app icon) to get redirected to the page where we can add our web application to firebase. Bottom Right Code Icon! 10. Fill out the form and add a nickname and make sure you select firebase hosting as this will be used when your application is deployed. Once you hit register firebase wizard will walk you through the next steps but I will be showing you how to do most of this via the console so we can hit next and skip through this for now. Make sure Firebase Hosting is checked 11. From here most of the integration on the Firebase website is complete. Now you will want to head over into your repo and in the console, we will type in and run the below command to install firebase tools. npm install -g firebase-tools 12. After that we will log into firebase from the console similarly to how its done with Heroku or Netlify. firebase login 13. Once logged in via the console run the following command. firebase init Type y when asked for confirmation. You will then be asked what is written below. Which Firebase CLI features do you want to set up for this folder? Press Space to select features, then Enter to confirm your choices. 14. Here we will be selecting Configure and deploy Firebase Hosting sites. ( Use Up/Down arrows to navigate the list. Spacebar to select) Here we want to use an existing project because we are connecting to the project already created on Firebase. Use the project we created on Firebase to continue 15. Now it will prompt what directory to use as public, it is the build directory of your react app by default, so input build and press enter. 16. It should then ask if your application is a SPA( Single Page Application ). 17. 
With this, Firebase is now initialized in our project. From here we just need to configure the APP_KEYS. 18. At this point we will need to add data to a .env file; it is good practice to have both a .env.development and a .env.production, so that in case of any mishaps you have only been working in the development env. 19. From here we need to move back to the Firebase console to the settings page where we left off. Here we will be grabbing the Configuration under Firebase SDK snippet. This application is already gone! 20. Now we need to move back to our env file and fill out all of the relevant variables that Firebase provided us with. Plug in the Firebase variables into the env Note that REACT_APP_NAME will take whatever value you want to be displayed. As for REACT_APP_DEFAULT_USER_ID, we can plug in the User UID we held onto from Firebase previously. 21. Once that information is filled out and in both env files, it's time to build and deploy the application. npm run build --prod 22. Run the following command to deploy the project. firebase deploy 23. Once the deployment is complete we have one last step to complete, and that's creating the real-time database ('Firestore'). Head over to the Firebase console again and select the database from the sidebar. Realtime Database 24. Create a database in the region closest to you; for simplicity we will choose test mode. Selecting Test mode was for this guide; you can choose locked mode if you plan on deploying immediately. 25. From here our Firestore database is configured and ready to receive and store data. With this, authentication is set up and integrated, and our test user can access it as needed. This guide didn't have a specific web application in mind but was more focused on utilizing Firebase to handle user authentication and login. The framework is fairly straightforward and simple to use. Firebase also comes with many baked-in social logins already available if you plan to authenticate a user via Google/Facebook/Twitter. This was a quick guide on the many utilities that Firebase can provide to get a web application running fairly quickly.
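For reference, here is a minimal sketch of the client-side wiring this guide assumes but never shows: initializing the Firebase web SDK from the .env values and signing in the test user with email and password. The REACT_APP_* key names below (other than REACT_APP_NAME and REACT_APP_DEFAULT_USER_ID mentioned above) are hypothetical placeholders; use the keys you actually put in your .env files. The calls assume the v8-style firebase npm package that was current when this guide was written.

// src/firebase.js -- sketch only; key names are placeholders, SDK is the v8 web API
import firebase from "firebase/app";
import "firebase/auth";

const firebaseConfig = {
  apiKey: process.env.REACT_APP_FIREBASE_API_KEY,         // values copied from the Firebase SDK snippet
  authDomain: process.env.REACT_APP_FIREBASE_AUTH_DOMAIN,
  projectId: process.env.REACT_APP_FIREBASE_PROJECT_ID,
};

firebase.initializeApp(firebaseConfig);
export const auth = firebase.auth();

// Sign in the test user created in step 6 (email/password sign-in was enabled in step 5).
export function signInTestUser(email, password) {
  return auth
    .signInWithEmailAndPassword(email, password)
    .then((cred) => cred.user.uid) // should match the User UID noted in step 7
    .catch((err) => console.error("Sign-in failed:", err.code, err.message));
}

Calling signInTestUser with the email and password from step 6 should resolve to the same UID you recorded, which is a quick end-to-end check that the console setup worked.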
https://medium.com/@stevenks17/firebase-authentication-setup-guide-7a266462ee47
['Steve K']
2020-12-13 21:11:03.621000+00:00
['Development', 'Firebase', 'Technology', 'Learning', 'Programming']
816
How To Set Up a Network Bridge for LXD Containers
Most of our web applications run in LXD containers. Not without reason LXD is one of the most important features of Ubuntu Server for me. There are many ways to access a web application in an LXD container from outside. For example, you can use a reverse proxy to control access to the containers. Another possibility is to set up a network bridge so that the containers are in the same network as the container host (the Ubuntu server). In this article I would like to describe how to set up a network bridge for LXD containers. Network Bridge for LXD Containers To set up a network bridge under Ubuntu, you need to install the bridge-utils: $ apt install bridge-utils Then you can set up the network bridge. Ubuntu 16.04 Up to Ubuntu 16.04, Ubuntu uses ifupdown to set network connection settings. The configuration is done in the files under /etc/network/. A simple network bridge — to get the containers into the host network — might look like this: $ cat /etc/network/interfaces # This file describes the network interfaces available on your system # and how to activate them. For more information, see interfaces(5). source /etc/network/interfaces.d/* # The loopback network interface auto lo iface lo inet loopback # The main Bridge auto br0 iface br0 inet dhcp bridge-ifaces enp4s0 bridge-ports enp4s0 up ip link set enp4s0 up # The primary network interface iface enp4s0 inet manual In this example the bridge gets its address from a DHCP server. The real network card enp4s0 is set to manual mode and assigned to the bridge. Ubuntu 18.04 As of Ubuntu 18.04, Netplan is used to configure the network connections. The configuration files can be found under /etc/netplan/. A definition for the bridge could look like this: $ cat /etc/netplan/50-cloud-init.yaml # This file is generated from information provided by # the datasource. Changes to it will not persist across an instance. # To disable cloud-init's network configuration capabilities, write a file # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following: # network: {config: disabled} network: ethernets: enp3s0: dhcp4: no version: 2 bridges: br0: dhcp4: no addresses: - 10.10.10.5/24 gateway4: 10.10.10.254 nameservers: addresses: - 10.10.10.254 interfaces: - enp3s0 In the upper part you configure the real network card (enp3s0) and don't assign an address to it. Then the definition of the network bridge follows. It is set up like a static network connection and also contains the key interfaces. There you define which real network card should be "bridged". You will find further (more complex) examples of network bridges on the official website. Now the following command applies the changes to the network settings: $ netplan apply --debug Assign Network Bridge Once you have finished setting up the network bridge and it gets the correct IP address, you have to tell the LXD container to get its IP address from the network bridge. This can be done with the following command: $ lxc config device add containername eth0 nic nictype=bridged parent=br0 name=eth0 With name=eth0 you define under which name the network card can be found in the container. Now you can configure eth0 in the container as you like. From now on the container will get an IP address from the host network. Conclusion You can set up a simple network bridge quite easily and assign it to a container. This allows other users on the network to access a web application without the need to set up a reverse proxy on the container host.
More complex scenarios are also possible (VLANs, multiple bridges to get containers into different networks, etc.), but this would go beyond the scope of this short article.
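A quick way to verify the result (a sketch; the container name web1 below is a placeholder): attach the bridge to a container, restart it, and confirm it picked up an address from the host network rather than from LXD's internal bridge.

# Attach br0 to the container; note that the device type "nic" comes before the key=value options
$ lxc config device add web1 eth0 nic nictype=bridged parent=br0 name=eth0

# Restart the container and check that it now shows an IP from the host network
$ lxc restart web1
$ lxc list web1

If lxc list shows an address in the same subnet as the host (10.10.10.0/24 in the Netplan example above), the bridge is working.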
https://medium.com/hackernoon/how-to-set-up-a-network-bridge-for-lxd-containers-98e3e7d1f273
['Open School Solutions']
2019-02-09 17:12:28.600000+00:00
['Linux', 'Open Source', 'Technology', 'Ubuntu', 'Tech']
817
[Live Stream] 2021 Soul Train Music Awards | Full Show On “BET”
[Live Stream] 2021 Soul Train Music Awards | Full Show On “BET” 2021 Soul Train Music Awards, at New York’s Apollo Theater, Sunday Nov 28, 2021 at 20PM << GO LIVE NOW >> ▶▶ https://cutt.ly/8T2eQlA SHOW INFO : Event : 2021 Soul Train Music Awards Date/Time : Sunday Nov 28, 2021 at 20PM Venue : New York’s Apollo Theater, USA Hosted by Tisha Campbell and Tichina Arnold Ashanti will be honored with the Lady of Soul Award, while Maxwell will receive the Legend Award. Song of the Year: Blxst feat. Ty Dolla $Ign & Tyga — “Chosen”; Bruno Mars, Anderson .Paak (Silk Sonic) — “Leave the Door Open”; H.E.R. — “Damage”; Jazmine Sullivan — “Pick Up Your Feelings”; Wizkid feat. Tems — “Essence”; Yung Bleu feat. Drake — “You’re Mines Still” Album of the Year: Blxst — No Love Lost; Doja Cat — Planet Her; Giveon — When It’s All Said and Done… Take Time; H.E.R. — Back of My Mind; Jazmine Sullivan — Heaux Tales; Wizkid — Made in Lagos Video of the Year: Bruno Mars, Anderson .Paak (Silk Sonic) — “Leave the Door Open”; Chris Brown, Young Thug feat. Future, Lil Durk, Latto — “Go Crazy (Remix)”; H.E.R. — “Damage”; Jazmine Sullivan — “Pick Up Your Feelings”; Normani feat. Cardi B — “Wild Side”; Wizkid feat. Tems — “Essence”. ❖ ALL CATEGORY WATCHTED ❖ An action story is similar to adventure, and the protagonist usually takes a risky turn, which leads to desperate scenarios (including explosions, fight scenes, daring escapes, etc.). Action and adventure usually are categorized together (sometimes even while “action-adventure”) because they have much in common, and many stories are categorized as both genres simultaneously (for instance, the James Bond series can be classified as both). Continuing their survival through an age of a Zombie-apocalypse as a makeshift family, Columbus (Jesse Eisenberg), Tallahassee (Woody Harrelson), Wichita (Emma Stone), and Little Rock (Abagail Breslin) have found their balance as a team, settling into the now vacant White House to spend some safe quality time with one another as they figure out their next move. However, spend time at the Presidential residents raise some uncertainty as Columbus proposes to Wichita, which freaks out the independent, lone warrior out, while Little Rock starts to feel the need to be on her own. The women suddenly decide to escape in the middle of the night, leaving the men concerned about Little Rock, who’s quickly joined by Berkley (Avan Jogia), a hitchhiking hippie on his way to place called Babylon, a fortified commune that’s supposed to be safe haven against the zombies of the land. Hitting the road to retrieved their loved one, Tallahassee and Columbus meet Madison (Zoey Deutch), a dim-witted survivor who takes an immediate liking to Columbus, complicating his relationship with Wichita. ✅ ANALYZER GOOD / BAD ✅ To be honest, I didn’t catch Zombieland when it first got released (in theaters) back in 2009. Of course, the movie pre-dated a lot of the pop culture phenomenon of the usage of zombies-esque as the main antagonist (i.e Game of Thrones, The Maze Runner trilogy, The Walking Dead, World War Z, The Last of Us, etc.), but I’ve never been keen on the whole “Zombie” craze as others are. So, despite the comedy talents on the project, I didn’t see Zombieland….until it came to TV a year or so later. Surprisingly, however, I did like it. Naturally, the zombie apocalypse thing was fine (just wasn’t my thing), but I really enjoyed the film’s humor-based comedy throughout much of the feature. 
With the exception of 2008’s Shaun of the Dead, majority of the past (and future) endeavors of this narrative have always been serious, so it was kind of refreshing to see comedic levity being brought into the mix. Plus, the film’s cast was great, with the four main leads being one of the film’s greatest assets. As mentioned above, Zombieland didn’t make much of a huge splash at the box office, but certainly gained a strong cult following, including myself, in the following years. Flash forward a decade after its release and Zombieland finally got a sequel with Zombieland: Double Tap, the central focus of this review post. Given how the original film ended, it was clear that a sequel to the 2009 movie was indeed possible, but it seemed like it was in no rush as the years kept passing by. So, I was quite surprised to hear that Zombieland was getting a sequel, but also a bit not surprised as well as Hollywood’s recent endeavors have been of the “belated sequels” variety; finding mixed results on each of these projects. I did see the film’s movie trailer, which definitely was what I was looking for in this Zombieland 2 movie, with Eisenberg, Harrelson, Stone, Breslin returning to reprise their respective characters again. I knew I wasn’t expecting anything drastically different from the 2009 movie, so I entered Double Tap with good frame of my mind and somewhat eagerly expecting to catch up with this dysfunctional zombie killing family. Unfortunately, while I did see the movie a week after its release, my review for it fell to the wayside as my life in retail got a hold of me during the holidays as well as being sick for a good week and half after seeing the movie. So, with me still playing “catch up” I finally have the time to share my opinions on Zombieland: Double Tap. And what are they? Well, to be honest, my opinions on the film was good. Despite some problems here and there, Zombieland: Double Tap is definitely a fun sequel that’s worth the decade long wait. It doesn’t “redefine” the Zombie genre interest or outmatch its predecessor, but this next chapter of Zombieland still provides an entertaining entry….and that’s all that matters. Returning to the director’s chair is director Ruben Fleischer, who helmed the first Zombieland movie as well as other film projects such as 30 Minutes or Less, Gangster Squad, and Venom. Thus, given his previous knowledge of shaping the first film, it seems quite suitable (and obvious) for Fleischer to direct this movie and (to that affect), Double Tap succeeds. Of course, with the first film being a “cult classic” of sorts, Fleischer probably knew that it wasn’t going to be easy to replicate the same formula in this sequel, especially since the 10-year gap between the films. Luckily, Fleischer certainly excels in bringing the same type of comedic nuances and cinematic aspects that made the first Zombieland enjoyable to Double Tap; creating a second installment that has plenty of fun and entertainment throughout. A lot of the familiar / likeable aspects of the first film, including the witty banter between four main lead characters, continues to be at the forefront of this sequel; touching upon each character in a amusing way, with plenty of nods and winks to the original 2009 film that’s done skillfully and not so much unnecessarily ham-fisted. 
Additionally, Fleischer keeps the film running at a brisk pace, with the feature having a runtime of 99 minutes in length (one hour and thirty-nine minutes), which means that the film never feels sluggish (even if it meanders through some secondary story beats / side plot threads), with Fleischer ensuring a companion sequel that leans with plenty of laughter and thrills that are presented snappy way (a sort of “thick and fast” notion). Speaking of which, the comedic aspect of the first Zombieland movie is well-represented in Double Tap, with Fleischer still utilizing its cast (more on that below) in a smart and hilarious by mixing comedic personalities / personas with something as serious / gravitas as fighting endless hordes of zombies every where they go. Basically, if you were a fan of the first Zombieland flick, you’ll definitely find Double Tap to your liking. In terms of production quality, Double Tap is a good feature. Granted, much like the last film, I knew that the overall setting and background layouts weren’t going to be something elaborate and / or expansive. Thus, my opinion of this subject of the movie’s technical presentation isn’t that critical. Taking that into account, Double Tap does (at least) does have that standard “post-apocalyptic” setting of an abandoned building, cityscapes, and roads throughout the feature; littered with unmanned vehicles and rubbish. It certainly has that “look and feel” of the post-zombie world, so Double Tap’s visual aesthetics gets a solid industry standard in my book. Thus, a lot of the other areas that I usually mentioned (i.e set decorations, costumes, cinematography, etc.) fit into that same category as meeting the standards for a 202 movie. Thus, as a whole, the movie’s background nuances and presentation is good, but nothing grand as I didn’t expect to be “wowed” over it. So, it sort of breaks even. This also extends to the film’s score, which was done by David Sardy, which provides a good musical composition for the feature’s various scenes as well as a musical song selection thrown into the mix; interjecting the various zombie and humor bits equally well. There are some problems that are bit glaring that Double Tap, while effectively fun and entertaining, can’t overcome, which hinders the film from overtaking its predecessor. Perhaps one of the most notable criticism that the movie can’t get right is the narrative being told. Of course, the narrative in the first Zombieland wasn’t exactly the best, but still combined zombie-killing action with its combination of group dynamics between its lead characters. Double Tap, however, is fun, but messy at the same time; creating a frustrating narrative that sounds good on paper, but thinly written when executed. Thus, problem lies within the movie’s script, which was penned by Dave Callaham, Rhett Reese, and Paul Wernick, which is a bit thinly sketched in certain areas of the story, including a side-story involving Tallahassee wanting to head to Graceland, which involves some of the movie’s new supporting characters. It’s fun sequence of events that follows, but adds little to the main narrative and ultimately could’ve been cut completely. Thus, I kind of wanted see Double Tap have more a substance within its narrative. Heck, they even had a decade long gap to come up with a new yarn to spin for this sequel…and it looks like they came up a bit shorter than expected. 
Another point of criticism that I have about this is that there aren’t enough zombie action bits as there were in the first Zombieland movie. Much like the Walking Dead series as become, Double Tap seems more focused on its characters (and the dynamics that they share with each other) rather than the group facing the sparse groupings of mindless zombies. However, that was some of the fun of the first movie and Double Tap takes away that element. Yes, there are zombies in the movie and the gang is ready to take care of them (in gruesome fashion), but these mindless beings sort take a back seat for much of the film, with the script and Fleischer seemed more focused on showcasing witty banter between Columbus, Tallahassee, Wichita, and Little Rock. Of course, the ending climatic piece in the third act gives us the best zombie action scenes of the feature, but it feels a bit “too little, too late” in my opinion. To be honest, this big sequence is a little manufactured and not as fun and unique as the final battle scene in the first film. I know that sounds a bit contrive and weird, but, while the third act big fight seems more polished and staged well, it sort of feels more restricted and doesn’t flow cohesively with the rest of the film’s flow (in matter of speaking). What’s certainly elevates these points of criticism is the film’s cast, with the main quartet lead acting talents returning to reprise their roles in Double Tap, which is absolutely the “hands down” best part of this sequel. Naturally, I’m talking about the talents of Jessie Eisenberg, Woody Harrelson, Emma Stone and Abigail Breslin in their respective roles Zombieland character roles of Columbus, Tallahassee, Wichita, and Little Rock. Of the four, Harrelson, known for his roles in Cheers, True Detective, and War for the Planet of the Apes, shines as the brightest in the movie, with dialogue lines of Tallahassee proving to be the most hilarious comedy stuff on the sequel. Harrelson certainly knows how to lay it on “thick and fast” with the character and the s**t he says in the movie is definitely funny (regardless if the joke is slightly or dated). Behind him, Eisenberg, known for his roles in The Art of Self-Defense, The Social Network, and Batman v Superman: Dawn of Justice, is somewhere in the middle of pack, but still continues to act as the somewhat main protagonist of the feature, including being a narrator for us (the viewers) in this post-zombie apocalypse world. Of course, Eisenberg’s nervous voice and twitchy body movements certainly help the character of Columbus to be likeable and does have a few comedic timing / bits with each of co-stars. Stone, known for her roles in The Help, Superbad, and La La Land, and Breslin, known for her roles in Signs, Little Miss Sunshine, and Definitely, Maybe, round out the quartet; providing some more grown-up / mature character of the group, with Wichita and Little Rock trying to find their place in the world and how they must deal with some of the party members on a personal level. Collectively, these four are what certainly the first movie fun and hilarious and their overall camaraderie / screen-presence with each other hasn’t diminished in the decade long absence. To be it simply, these four are simply riot in the Zombieland and are again in Double Tap. 
With the movie keeping the focus on the main quartet of lead Zombieland characters, the one newcomer that certainly takes the spotlight is actress Zoey Deutch, who plays the character of Madison, a dim-witted blonde who joins the group and takes a liking to Columbus. Known for her roles in Before I Fall, The Politician, and Set It Up, Deutch is a somewhat “breath of fresh air” by acting as the tagalong team member to the quartet in a humorous way. Though there isn’t much insight or depth to the character of Madison, Deutch’s ditzy / air-head portrayal of her is quite hilarious and is fun when she’s making comments to Harrelson’s Tallahassee (again, he’s just a riot in the movie). The rest of the cast, including actor Avan Jogia (Now Apocalypse and Shaft) as Berkeley, a pacifist hippie that quickly befriends Little Rock on her journey, actress Rosario Dawson (Rent and Sin City) as Nevada, the owner of a Elvis-themed motel who Tallahassee quickly takes a shine to, and actors Luke Wilson (Legally Blonde and Old School) and Thomas Middleditch (Silicon Valley and Captain Underpants: The First Epic Movie) as Albuquerque and Flagstaff, two traveling zombie-killing partners that are mimic reflections of Tallahassee and Columbus, are in minor supporting roles in Double Tap. While all of these acting talents are good and definitely bring a certain humorous quality to their characters, the characters themselves could’ve been easily expanded upon, with many just being thinly written caricatures. Of course, the movie focuses heavily on the Zombieland quartet (and newcomer Madison), but I wished that these characters could’ve been fleshed out a bit. Lastly, be sure to still around for the film’s ending credits, with Double Tap offering up two Easter Eggs scenes (one mid-credits and one post-credit scenes). While I won’t spoil them, I do have mention that they are pretty hilarious. ✅ FINAL THOUGHTS ✅ It’s been awhile, but the Zombieland gang is back and are ready to hit the road once again in the movie Zombieland: Double Tap. Director Reuben Fleischer’s latest film sees the return the dysfunctional zombie-killing makeshift family of survivors for another round of bickering, banting, and trying to find their way in a post-apocalyptic world. While the movie’s narrative is a bit messy and could’ve been refined in the storyboarding process as well as having a bit more zombie action, the rest of the feature provides to be a fun endeavor, especially with Fleischer returning to direct the project, the snappy / witty banter amongst its characters, a breezy runtime, and the four lead returning acting talents. Personally, I liked this movie. I definitely found it to my liking as I laugh many times throughout the movie, with the main principal cast lending their screen presence in this post-apocalyptic zombie movie. Thus, my recommendation for this movie is favorable “recommended” as I’m sure it will please many fans of the first movie as well as to the uninitiated (the film is quite easy to follow for newcomers). While the movie doesn’t redefine what was previous done back in 2009, Zombieland: Double Tap still provides a riot of laughs with this make-shift quartet of zombie survivors; giving us give us (the viewers) fun and entertaining companion sequel to the original feature.
https://medium.com/@2021_Soul-Train-Music-Awards/live-stream-2021-soul-train-music-awards-full-show-on-bet-7e7167c0c89e
['Soul Train Music Awards', 'Full Show']
2021-11-28 11:52:45.133000+00:00
['Technology', 'Bussiness', 'Music', 'Festivals', 'Awards']
818
OLPortal — Getting Ease in Decentralized Messenger Might be Easier Than You Think
OLPortal — Getting Ease in Decentralized Messenger Might be Easier Than You Think Ilmizer Apr 23, 2020·7 min read The 1st in the World Decentralized Neural Messenger with AI The development of technology is growing so fast, providing various facilities for humans in carrying out their daily lives. Many new discoveries that previously looked like nonsense and delusion as in science fiction but now can be realized. Like mobile phones with touch screens or tablets with large screens but have a thin body. Innovations in this technology make human activities easier and more efficient in all aspects of life. One technology development that is very popular today is AI (Artificial Intelligence). The most prominent AI function is in the form of automation of various service functions. Ai illustration (source: pixabay) AI has been used in our daily lives, for example, such as Google Assistant, Siri on Apple, Self-driving cars and online shops that we often visit will provide recommendations items that are also the ability of AI. Look at how many benefits of AI to support our lives. AI can be used in almost all fields of modern life. This is a good opportunity to develop unique niches to solve various types of problems. One project that focuses on this is OLPORTAL. OLPORTAL mission How does OLPORTAL help the main problem in AI? There are some of the most common problems that must be resolved if a project wants to make a quality AI product. These problems include: Poor privacy. The lack of availability of quality data sets on a centralized system cannot overcome a large user base. The centralized system makes user data including sensitive and confidential information almost available to the host server, of course, this is not safe. Lack of trained experts. Complex processes are often not understood by everyone, so experts are needed. Only a few people are really able to design such systems to be easy to understand and capable of modules that are easy to use are very rare. Lack of integration of AI, business and marketing. Lack of communication makes the service not optimal, this can be detrimental to service providers and clients. So that the integration between Ai, business, and marketing has a very big influence. OLPORTAL provides a safe and reliable decentralized platform. The market will be more attractive because clients of a different individual or company shareholders get access to various AI technologies. OLPORTAL also provides the best data protection because of its decentralized system. User data at the time of registration is recorded in the blockchain so it cannot be accessed and cannot be tracked by the owner on the system. In addition, many data labelling functions are served by the users themselves, this makes it easy for the system to improve its performance after the technology is installed on the platform by the user after training. All of these features can overcome the problem of data quality and scarcity of human resources in AI through a cooperative model. How does OLPORTAL improve communication efficiency? OLPORTAL increases communication efficiency in three ways: Increase message composition speed; Improve user ability; Earnings when communicating. These three ways can maximize the function of communication. For millennials, doing one job that can provide many functions becomes something that is very much needed. That’s why OLPORTAL also sets target-bots that bring possible income from collaboration with OLAI. One of the advantages of this project is the decentralized system. 
Users will get complete anonymity. All personal information belongs entirely to the user, which is why user privacy is safe. This decentralized system will also increase the attractiveness of commercial bots: all data belongs to its users, creators keep their intellectual/digital property, and there is freedom of expression. OLPORTAL develops unique, innovative technologies that enable the creation of AI products right in your messenger account. OLPORTAL is a decentralized messenger built on neural networks with AI dialogue functions. This allows you to compose messages automatically. The ecosystem is therefore developed around OLAI neurobots with unique personalities and functions. The flagship structure of the OLPORTAL project
https://medium.com/@ilmizer/olportal-getting-ease-in-decentralized-messenger-might-be-easier-than-you-think-4eaef808ad0
[]
2020-04-28 04:07:38.640000+00:00
['Neurobot', 'Decentralized', 'Artificialintelligence', 'Chatbot', 'Technology']
819
The Fall of Mammoths
There are micro firms, there are medium firms, and then there are big firms. But some firms go beyond and become mammoths. These are companies whose revenues are on a scale that could be equated to a small country’s GDP. These companies hire the best designers, smartest engineers, leading marketers, and top-tier managers. But sometimes, they still don’t succeed. And the bigger they are, the harder they fall. Let’s talk about some mammoths of the smartphone industry who, despite their size, resources, expertise, and experiences, just couldn’t hit the mark. These tech giants made errors in judgment, big enough to prove fatal. In fact, what these companies did wrong is actually quite interesting. Let’s jump in. 1. Not Taking a Hint from Time: Nokia Let’s rewind to the 2000s. If someone had a phone, there was a good chance that it was a Nokia. But we all know where Nokia stands now — almost nowhere. How did this happen? In the old days, phones could be anything — some of them would flip, some of them would slide, and there were all sorts of screen types and sizes. People used to choose a phone almost purely based on its hardware. Maybe they liked the unique color or maybe the keys felt nice. In this battle, Nokia was outstanding. Their phones were affordable and had the kind of durability that people talk about even now. So when and where did things go wrong? In 2007. The iPhone was launched and the battle quickly shifted from being about hardware to being about software. All the things that we talked about above, faded away in importance. Now everyone had their eyes on just one thing — having access to the best apps and that was it. Nokia didn’t actually do anything wrong. They just did not act. They kept releasing their trusty old phones for a few years and their market share just evaporated. They severely underestimated the iPhone and the impact it would have. Not adapting early enough proved fatal for a company that was known to have innovation and marketing wizards. 2. Failing to Price Right at the Right Time: Microsoft In 2017, Microsoft announced that their Windows phone operating system was officially dead. But, those of us who’ve followed the smartphone market for quite some time, know that Windows phone died a long time before that. When it came out, the Windows phone was so promising. You got a radically different aesthetic to anything out there at the time. It was faster and felt much lighter than the bloated Android phones we were getting. The brand name ‘Windows’ alone carries enormous power when it comes to operating systems, so naturally, their product was all the rage. It could genuinely have been the third player that we always wanted in the iOS vs. Android battle. But if you’ve used a Windows phone, you’ll know that there was a woeful selection of apps to choose from. Users were missing out on so much. For the first three years, even the Instagram app was unavailable. At the same time, there were so many good options in the market at affordable prices. Windows had to bring something equivalent to the table. What they should have done was subsidize Windows phones. You can’t just drop a new operating system and expect it to beat the operating systems that have been going in the market for 3 years already. But by pricing more competitively, they could have had a chance. We know they didn’t do this. So, the Windows phone operating system was dead within 2 years. But Microsoft spent the next five years flogging a dead horse and it was painful to watch. 3. 
A Whole Series of Lessons to be Learnt: Sony Sony used to be one of the biggest smartphone companies. Now they sell only a tenth of the phones that they used to sell in 2014. And there are lots of reasons behind this, but mostly just a combination of poor decisions on top of really strange ideas. The Naming Conundrum The way Sony names its smartphones is borderline criminal. They went from Xperia X10 to Xperia S to Xperia T to Xperia Z. It sounds like they were just picking random letters from the alphabet. but it gets so much worse. They then went on to the names Xperia Z1, Z2, Z3, Z3+, Z5, and just when it looked like they might have figured out what they were doing, they hit us with Xperia XZ. It’s almost as if Sony was playing with its customers. After continuing the XZ series to XZ2 and XZ3, they hit us with their recently launched smartphone, which is named, no you did not guess it right, Sony Xperia 1. But what Sony did after this is almost unimaginable. They named their next phone Sony Xperia 1 (2). They are just playing with us at this point. Not Playing to Strengths Sony phones had good camera hardware but they did not take the best photos or videos. When you factor in that Sony is one of the few smartphone makers that also make cinema-quality cameras, it was shocking to see that their smartphone cameras were not winners. It’s funny. If anyone should have the best smartphone cameras, it’s Sony. And that was all they needed to do, but didn’t. Another reason why Sony smartphones just fell into irrelevance was that every phone launched for the next 4 years after Xperia Z, looked exactly the same. You just can’t do this in a market where bezels are shrinking every year and some people out there are bending screens. Inadequate PR Sony’s problems extended way beyond the phones themselves. Even the company’s PR strategy wasn’t very good. Take the Xperia 1 for instance. The phone was announced in 2019. It was sent to popular YouTube reviewers 6 months later. And the phone was only lent to the YouTubers for a week. Ouch. Compare this to OnePlus. They send out their phones 2 weeks before the launch. They also give send ‘guides’ along with the phones, explaining all the new features. And they let the reviewers keep the phones as long as they need, to test. This makes all the difference because if reviewers themselves are not happily willing to review your devices, how can you expect an efficient promotion?
https://medium.com/skynox/the-fall-of-mammoths-8509cee89924
['Tanmaiy Bhateja']
2020-08-15 11:57:33.562000+00:00
['Technology', 'Innovation', 'Smartphones', 'Product Development', 'Design']
820
Green Spiky Nova Pulses
VJ Loop |Green Spiky Nova Pulses at VJ Loop Zone A continuous supernova of green spikes and yellow-red blast radius, with psychedelic overtones and mandala undertones. Could also be thought of as a flowering cosmic cactus (maybe). Seamless loop. Media Format Info: Quicktime Movie, MPEG-4, MJPEG codec (Photo JPEG export), Progressive Scan, 60 FPS, 16:9 aspect ratio, 1280 x 720 (720P), 417 MB file size, 33s duration. Available at VJ Loop Zone
https://medium.com/sound-and-design/vj-loop-rygnovae1-1add3dbfa648
["Michael 'Myk Eff' Filimowicz"]
2021-01-02 04:11:13.785000+00:00
['Flair', 'Design', 'Art', 'Technology', 'Creativity']
821
Use a Different Volume For Your Docker Images in Ubuntu
Use a Different Volume For Your Docker Images in Ubuntu Increase your machine’s docker image capacity Just as a warning, I take no responsibility for any damage or issues on your machine that you believe to have been caused by following the steps of this guide! Here’s a quick guide I’ve written to expand your current docker image capacity on your machine, assuming you have another disk/volume you want to move your docker images too — or if you just quite like the idea of having them stored separately from your main volume. My laptop has a very small SDD, which I was constantly hitting the limits of while doing docker based development— so I decided to move my docker installation to my much larger HDD. I wrote this guide since I thought it might be useful for others. I’ve written it assuming an Ubuntu operating system, but it may be applicable to others, I just haven’t tried it anywhere else. Sudo Up You’ll be making some changes to some of the more hairier and riskier parts of your file system, so you’ll need to elevate your permissions: sudo su Note: The remainder of this guide will assume you have elevated your privileges. Finding Your Device Assuming you have an existing drive or a brand new drive you’ve just plugged into your machine, you can find your drive and the name of the device using Ubuntu’s Disks utility. The drive I’ll be using for my docker installation is the 1.0TB Hard Disk and the partition is the already recently reformatted /dev/sda2 device. Note down whatever your device is called, you’ll need it for the next steps. Note: Your Docker installation may not work out of the box with an NTFS filesystem and require further configuration (and trial and error). If your new drive is currently formatted under NTFS, I’d recommend that you reformat the device (if possible) to use a Linux filesystem such as ext4 . The Steps Assuming /dev/sda2 is your new disk — the following steps will stop your docker daemon, move your current docker directory, mount your new file system and then reinstate your docker installation on the new file system. systemctl stop docker mv /var/lib/docker /var/lib/docker-backup mount /dev/sda2 /var/lib/docker cp -rf /var/lib/docker-backup/* /var/lib/docker systemctl start docker Test out your newly expanded capacity by pulling a new docker image and running it — here’s a postgres image as an example: docker pull postgres && docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres If all goes well, the image will pull and run: Using default tag: latest latest: Pulling from library/postgres 27833a3ba0a5: Pull complete ed00742830a6: Pull complete dc611c2aceba: Pull complete a61becab5279: Pull complete 8dcff41e7aea: Pull complete 820bf1bbf0d7: Pull complete 050804429905: Pull complete 782c81275334: Pull complete bfb4aaa36ad6: Pull complete 9101c497b579: Pull complete 746ef6cad24f: Pull complete f3d6bb76fd3b: Pull complete 32cf0a104c6f: Pull complete de900772b4f7: Pull complete Digest: sha256:ed0bee606e90ed40cf5ffcfafc3d27c53d7132b7950324598ac06cba75f3e4cb Status: Downloaded newer image for postgres:latest 33a14d04702b01ba3a7470a633d8eda71352602eba617afc8fa60e6307ebc7bf Try a few other things with your docker installation to confirm you’re happy with it. 
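It is also worth double-checking that Docker is really writing to the new disk before going further. A small sketch, assuming the same setup as above:

# Confirm the new filesystem is mounted on Docker's data directory
df -h /var/lib/docker

# Confirm the daemon's root directory and see how much space images and containers use
docker info | grep "Docker Root Dir"
docker system df

If df does not show your new device here, re-check the mount step before moving on.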
Mount On Startup Assuming everything is now fine, you’ll want to ensure your disk gets mounted on the /var/lib/docker directory permanently on start-up: nano /etc/fstab Add the following line at the end of the file, again assuming /dev/sda2 is your device and also assuming is formatted as an ext4 filesystem. /dev/sda2 /var/lib/docker ext4 defaults 0 1 Note: If you’re unsure of the formatting of your filesystem you can confirm it on the Ubuntu Disks utility. Rollback If something goes wrong or for whatever reason, you want to return to your previous setup — you should be able to rollback in the following way: systemctl stop docker umount /dev/sda2 mv /var/lib/docker-backup /var/lib/docker # Remove the additional line added to /etc/fstab (if applicable) systemctl start docker Cleanup If you’re happy with the increased capacity in your docker installation and all has gone to plan, you can now remove your backup: rm -rf /var/lib/docker-backup That’s It! Thanks for reading, I hope you found it useful!
https://medium.com/clusterfk/use-a-different-volume-for-your-docker-images-in-ubuntu-4c0315be6d66
['Andy Macdonald']
2019-05-04 21:17:42.044000+00:00
['DevOps', 'Software Dev', 'Ubuntu', 'Docker', 'Technology']
822
Astroscale ships its space junk removal demonstration satellite for March 2021 mission
Japanese startup Astroscale has shipped its ELSA-d spacecraft to the Baikonur Cosmodrome in Kazakhstan, where it will be integrated with a Soyuz rocket for a launch scheduled for March of next year. This is a critical mission for Astroscale, since it will be the first in-space demonstration of the company's technology for de-orbiting space debris, a cornerstone of its proposed space sustainability service business. ELSA-d is a small satellite mission that will demonstrate two key technologies behind the company's vision for orbital debris removal. The first is a targeting component, demonstrating the ability to locate and dock with a piece of space debris using positioning sensors such as GPS and laser-ranging technologies. That will be used by a so-called 'servicer' satellite to find and attach to a 'target' satellite launched at the same time, which will stand in for a potential piece of debris. Astroscale intends to dock and release with the 'target' using its 'servicer' multiple times over the course of the mission, showing that it can identify and capture uncontrolled objects in space, and that it can manoeuvre them for a controlled de-orbit. This will essentially prove out the feasibility of the technology underlying its business model, and set it up for future commercial operations. In October, Astroscale announced that it had raised $51 million, bringing its total raised to date to $191 million. The company also acquired the staff and IP of a company called Effective Space Solutions in June, which it will use to build out the geostationary servicing arm of its business, in addition to the LEO operations that ELSA-d will demonstrate.
https://medium.com/@samirmalik8805/astroscale-ships-its-area-junk-elimination-demonstration-satellite-tv-for-pc-for-march-2021-venture-16dd7c510acc
[]
2020-12-23 07:00:01.310000+00:00
['Technology News', 'Satellite Technology', 'Space', 'Technews', 'Technology']
823
BitClave Weekly Update — Apr 9, 2018
Development MatchICO Major update on MatchICO — you asked us, and we did it! We launched registration via email. Now the access to MatchICO is easier and more affordable. Sign up via Facebook is also available. Also, we integrated ID verification form. It’s very important for ICO companies to be sure of their investor’s uniqueness. Users with verified ID get more valuable offers. Desearch Desearch is under very active development. We didn’t have any visible improvements last week. But we made a lot of works on our search algorithm. You’ll see significant changes this week. Feel free to share your ideas or opinion about the product here https://bitclave.typeform.com/to/DHc6ik. Platform Last week we finished APIs related to managing ETH crypto wallets in BASE and started the work on a service that will compute user’s wealth in ETH/ERC20 crypto. We also were working on defining details of integration of Base Login functionality and wealth data into Desearch. This week we plan to finish the service for wealth computation and add authorization via login/password to Base Login Marketing We have been collecting feedback from the community on different forums for Desearch.com, the decentralized search engine for crypto-focused content that we launched last week. We have been working on the updates as we write this post. (Shout-out to all those who shared feedback!) We were also recently featured in ‘Why the net giants are worried about the Web 3.0’ Medium post by Matteo Gianpietro Zago. We also published the blog post titled ‘Technology Giants Join Hands Against Cryptocurrency’, which will give you an idea of how the big tech companies are going against the blockchain companies in general. Our branding team has started working on the setup of brand guidelines for our new product — Desearch. We will be rolling out the updates in coming weeks. As a reminder, please follow Desearch on Twitter and Telegram for all latest updates. And feel free to apply to join our Marketing team here or write to us at [email protected] Events/meetings Our Head of Growth, Pratik Gandhi and Event Manager, Stanislav Liutenko were at the Blockchain and Bitcoin Conference in Berlin, Germany on April 4th, 2018. Also, Pratik spoke about the future of Internet at the C3 Crypto Conference in Berlin. Stanislav, our Event Manager, accompanied him for this conference as well. The C3 Crypto Conference was held at STATION-Berlin on April 5th and 6th. Follow for all the updates for this event on Twitter with the hashtag — #C3CryptoConference
https://medium.com/bitclave/bitclave-weekly-update-apr-9-2018-d3107c9b25e0
[]
2018-04-09 20:05:10.739000+00:00
['Technology', 'Decentralization', 'Blockchain', 'Conference', 'Bitcoin']
824
Thinking Of Upgrading Your Devices? Ask Yourself These 3 Questions First.
Can I afford it? This is a bar you should be setting for yourself whenever you are about to make a ‘non-essential’ purchase. If you’re in credit card debt or living paycheck to paycheck, it’s probably not the best time to spend $1000+ on a new phone. Your phone upgrade is not an essential — getting out of debt, paying off your credit card bill, and starting a retirement account — those things are. So, if you’re not doing great financially, maybe stick with your current device for a couple more years until you get yourself into better financial standing. How is my current device holding up? If you do have the money, the next question to ask is ‘How is my current device holding up?’. There is a time/frustration correlation — the amount of time you can deal with a slow-functioning, highly frustrating device before you reach a breaking point. At that point you can consider upgrading. In many cases you can save money by fixing small things like swapping the battery or broken screen, or replacing the foam on your earphones instead of buying another pair completely. Is the difference worth it? When considering an upgrade, it’s good practice to ask whether the features are worth it and whether there is an actual difference between one phone and the next. In many cases, from year to year, there might not really be. Over the course of about 3 years, you can start to notice some significant improvements. Two major factors to look for are the screen and the camera. Face detection and thumbprint readers might not be such important components. We all have different factors we look for depending on how we use our phones.
https://medium.com/@eric.r.kert/thinking-of-upgrading-your-devices-ask-yourself-these-3-questions-first-7ab1dfe98461
['Eric Kert']
2019-02-13 00:33:34.950000+00:00
['Devices', 'Saving Money', 'Minimalism', 'Money', 'Technology']
825
MinexPay Report | Week 3: Acceleration engaged!
Discount program: cashback Earlier in the second report, we told you about an early bird discount. As promised, we will now disclose the full details of the bonuses we have prepared for you! For a limited period of time, our early customers will enjoy cashback of up to 20% on the card purchase, payable in MNX upon card activation. Once your card is activated, your account will be credited with that amount! For example, if you paid 100 MNX for an Infinite card, you will get 20 bonus MNX added to your account. A better bonus than MinexBank returns We would like to emphasize that this cashback offers a better return than keeping your coins in MinexBank. This means that instead of leaving your MNX in the bank, you are better off paying for your card early and getting that 20% bonus! Are you ready to order your card AND earn a better return than MinexBank can offer at the same time? Follow this link and finalize your order: MinexPay Login and be sure to finish this step in order to proceed with KYC verification (which will take place in the second half of July).
https://medium.com/minecoin-blog/minexpay-report-week-3-acceleration-engaged-e6c5b85e0846
['Minexcoin']
2018-10-31 15:38:55.538000+00:00
['Technology', 'Blockchain', 'Finance', 'Bitcoin', 'Credit Cards']
826
On Premise VS Cloud: Time for you to make the transition with these simple tips
It may have taken some by surprise when Atlassian announced that it would be sunsetting its on-premise version of its JIRA Software by ceasing server product sales in February of 2021 and ending server support in February of 2024. But those with an eye towards innovation saw a movement towards the cloud coming for quite some time now, and Atlassian is prepared to shed those who aren’t prepared to follow. Before we jump into the benefits of moving your project management to the cloud, let’s first understand why you’re likely currently hosting your tool locally. On Premise Incentives There are up to three key reasons your company likely operates with an on premise solution: Security Ownership and Control Infrastructure Security Generally the chief concern with cloud hosting, on premise solutions keep security squarely in your control as there’s no third party intermediary between your employees and the software. Ownership and Control A one-time purchase of software licenses allows your organization to maximize the value of the purchase by supporting the software on your own arrangement. Additionally, the configuration, data, and system updates are entirely in your control. Infrastructure That you’re hosting your own project management software on premise means you organization has already engaged in the up front capital expenditure as well as the necessary hires to implement and maintain the system. To be frank, it’s quite possible that this capital commitment came before the widespread availability (and trust in) the cloud. Transition Trepidation Moving to the cloud means your organization will need to loosen its own grasp of the Security, Ownership, and Infrastructure that currently keeps your project management (and likely other company-wide softwares) afloat. Such a transition can be costly and may call the necessity of certain positions of your organization into question once the transition has concluded. So then the big question persists: Why make the leap to the cloud from your on premise solution? Truth is, many of the reasons that your organization cited as reasons to be on premise are addressed with grace through a cloud solution. The Benefits of the Cloud Adopting the Cloud and the providers who utilize it is admittedly an extension of trust in other organizations, but this trust is often warranted. A third party’s ability to optimize their service for a segment of your business generally improves the quality and lowers the cost of that service. Security Cloud Security is increasingly becoming safer than on premise security as cloud providers routinely invest heavily in the latest security measures to ward off a wide range of threats. It’s unlikely that the average organization will have the bandwidth to invest in on premise hardware for security measures at the rate of, say, AWS, so trusting cloud providers with modernized security is becoming less of a leap of faith and more of a reasonable expectation. This is just one of many instances where you can limit expenditure and strain on internal IT positions and infrastructures. Ownership and Control Cloud-based softwares are far more accessible than ones deployed on premise. Additionally, the burden to maintain, update, and optimize performance for scalability is also shifted to the software provider, allowing you to engage with an ideal experience of the program without needing to lean on your IT infrastructure to perfect the solution internally. 
Infrastructure There’s no dodging the fact that your organization has made a considerable investment in its on premise hosting solution, but a sunk cost is a dangerous fallacy when trying to estimate innovation for the future. Cloud-based softwares come with predictable costs in the form of an operational expenditure. The responsibilities of uptime, energy costs, disaster recovery are mantled by the provider as part of the arrangement as well. In this way, you’re safe to assume that cloud-hosted solutions shake out to being less expensive over time as essentially everything required to rollout, maintain, and update the solution is no longer your burden to shoulder. An Opportunity for Innovation If your organization is looking to move its project management solution from on premise to the cloud, it’s also likely a good time to assess and reconsider the management tool your company is employing to spearhead planning and execution efforts. Let’s identify three core ways to evaluate a software the same way we did to evaluate the benefits of the cloud: Ease of use Flexibility Scalability Ease of Use Organizational buy-in on a tool is tremendously important, as it ultimately determines if your company is maximizing the value of the solution they’ve selected. An easy to use solution helps bridge the gap between your employees using a tool as part of an organizational mandate and using a tool because it actually facilitates reaching objectives by making daily workflows easier. Modern PM solutions incentivize usage by weaving more instances of collaboration into them so that they become live working environments instead of “update hubs” that simply track what’s already been done. Flexibility The more organizational workflows that a solution can gracefully solve, the better, as it comes with a bevy of benefits. First, this will require your team to juggle fewer subscription engagements across fewer vendors. Secondly, cross-departmental workflows are facilitated as they do not require integrations between tools, or even worse, moving data outside of these tools altogether. The drawback to consider with all-in-one flexible workspaces is that drawing meaningful insights across the platform can be a challenge, which lends to our third point. Scalability It’s imperative that the selected solution fits your organization both today and tomorrow, meaning it must have the flexibility to cater to many workflows while still scaling in an such a way that does not result in an organizational nightmare. Striking a balance between flexibility and scalability is arguably the key challenge that separates the top-tier workflow solutions from ones that thrive in the hands of a certain department. Consider how your management may glean meaningful insights across two or more departments with entirely different workflows to determine a solution’s cross-departmental scalability. Final Thoughts Moving your organization to the cloud is no small feat, but it’s a vital transformation that is beginning to look like a “when, not if” transition. What feels like a leap of faith is coming more of an acknowledgement of the ability for cloud vendors to optimize their service more effectively than your organization can internally. The same can be said about workflow solutions. What feels like a stretch in unifying more of your organization around fewer solutions will eventually be seen as an undeniable leap towards scalable productivity instead of forcing too many square pegs into round holes. 
That said, selecting the best solution as part of your digital transformation is a major factor in the future success of an organization's workflows.
https://blog.niftypm.com/its-time-to-bring-your-project-management-from-on-premise-to-the-cloud-c1388f660161
[]
2020-12-20 02:33:12.208000+00:00
['Enterprise Technology', 'On Premise', 'Project Management', 'Enterprise', 'Enterprise Software']
827
Redefining Beauty
Redefining Beauty As they say “beauty lies in the eyes of the beholder”. This is what brings pleasure to the senses including the mind and spirit. Society has put so much emphasis on such traits, be it the color of our skin or the size of our body. Fuller lips, luscious long hair, prominent cheekbones, soft skin have all collectively become the deciding factor to being deemed beautiful. Growing up we came across models and movie stars who have embodied the so-called “perfect” woman. There is a set mindset of trying to reach up to these unreasonable versions set by the so-called intellectual strata of the society. Having someone decide whether you are beautiful or ugly, fat or thin has become a way of living that we all succumb to. Throughout our life, we question ourselves as to where we lie in the spectra of beauty, and in doing this, there are many whose self-esteem plummets with the definitions and views of someone else. Teenagers and young adults go through rigorous, harmful diets and exercises to accomplish this version of beauty. Women have been facing constant backlash for many years for not reaching the measure of acceptance deemed by society. Representation of different races, body types, and the numerous demarcations of women is finding a new voice in the coming future. We realize the importance of empowering ourselves and all generations to come, they should not be chained by this unrealistic interpretation of beauty. Redefining beauty In today’s world, we are facing a massive change and slowly breaking the barriers when it comes to defining the word beauty. Female actors with dusky skin and unconventional body types are being represented because of their talent and not just as a pretty face on the screen. Advertisements are featuring a larger representation of women who could help with the idea of seeing beauty in everyone. Many women faced the odds of having acid thrown on them only to show us the strength and beauty within. Seeing this shift in the culture of numerous communities has led many to feel proud and confident in the way they are as well as the way they perceive others. The gateways of beauty have finally opened. Beauty to someone could be their rendition of the elegance of Manushi Chhillar or the strength of Mary Kom, the bravery of Malala Yousafzai, or even the mind of Shakuntala Devi. Today we are living in an expansive world where we can see diversity in the newfound meaning of beauty. Beauty is more of a feeling that you invoke in someone, an aura you project, and the personality you develop. It all culminates together making you beautiful both internally and externally. One should never let themselves conform to what others believe is beautiful and instead strive to realize the beauty in themselves as well as others all around.
https://medium.com/ieee-women-in-engineering-vit/redefining-beauty-3d4954a9e802
['Disha Paul']
2020-12-22 06:24:12.713000+00:00
['Technology', 'Women', 'Beauty', 'Women In Tech']
828
Netflix vs Deepfake: The Irishman
How Does De-aging Technology Work? According to the film's VFX supervisor, Pablo Helman, who stated that 1,750 shots were required for two and a half hours of shooting, carefully placed on-set lighting captured the actors' facial performances from different angles, while at the same time shining infrared light on the actors' faces without being visible on the production camera. Thus, the system was able to analyze the lighting and texture information and create a machine-readable geometry network for each frame. Working with multiple cameras is indispensable in the process. While shooting, they work with a three-camera rig whose central camera doubles as the director's camera. The other two cameras are there to record data. Why? "Because we don't have markers on their faces, the more data we have the better the chance to recreate performance. So the software will keep an eye on these three cameras and the information from them," Helman explains. So more data means a more realistic and accurate image. De-aging technology is not a technique Hollywood is unfamiliar with. We know that it was used in many films before The Irishman (2019), such as The Curious Case of Benjamin Button (2008), Captain America: Civil War (2016), and Blade Runner 2049 (2017). Although this technology has managed to create an astonishing illusion today, it is not yet perfect. But when it is, it is not difficult to foresee that cinema and acting will take on a new dimension. You can check this video if you want to learn more about how these scenes were made.
https://medium.com/predict/netflix-vs-deepfake-the-irishman-1d4754de2701
['Mustafa Yarımbaş']
2020-10-30 13:37:06.860000+00:00
['Machine Learning', 'Technology', 'Artificial Intelligence', 'Film', 'Netflix']
829
Bitfury Launches Digital Currency Fund in Japan
Bitfury Launches Digital Currency Fund in Japan Alongside Licensed Fund Manager Nippon Angel Investment Company Today the Bitfury Group and the Nippon Angel Investment Company (NAIC) announced the launch of a digital currency infrastructure fund, the first of its kind in Japan. The fund will enable individual and institutional investors to diversify their portfolios into digital currency infrastructure. NAIC is a licensed fund manager in Japan, regulated by the Japan Financial Services Agency, and has conducted thorough due diligence into Bitfury as a reliable investment partner. Despite its distinctive risk-reward profile, institutional capital was previously limited in its ability to enter the digital currency infrastructure sector due to a lack of available vehicles. Through the NAIC digital currency fund, as well as Bitfury’s institutional investor program, these investors can now invest in Bitfury’s top-tier data centers across the world. “We are looking forward to bringing this diverse investment route to investors in Japan,” said Valery Vavilov, CEO and founder of Bitfury. “We believe that this investment, at a time where we are seeing unprecedented market changes and volatility, will help further adoption of digital assets by making its underlying infrastructure more secure.” “This fund brings digital currency investments to Japanese investors at a critical time, when we are seeing risk/return profiles across other asset classes significantly change in reaction to the COVID-19 pandemic,” said Katsu Konno, Head of Bitfury Japan. “We are honored to be moving forward on this offering with the esteemed Nippon Angel Investment Company, providing an avenue to turn Japan’s more than US $9 trillion in cash deposits into a high-return investment.” Bitfury is the largest European emerging technologies company and has been a leading provider of turnkey digital currency infrastructure solutions since 2011. Bitfury’s hardware innovation division, its fully owned innovative cooling subsidiary Allied Control, and its in-house semiconductor and R&D experts have successfully launched profitable bitcoin mining sites around the world, including in Canada, Norway, Iceland, the Republic of Georgia and the Republic of Kazakhstan. Bitfury combined its experience in securing access to low-cost energy, its high-performance hardware design and operational expertise, and the company’s proven track record in institutional deal structure to design this program. Underpinning the program is Bitfury’s worldwide digital currency infrastructure, operating with production costs in the first quartile of the global cost curve and offering a strong financial profile that includes strong resiliency to price changes. To learn more about Bitfury’s digital asset infrastructure investment program, visit www.bitfury.com. The Bitfury Group is the world’s leading emerging technologies company. Bitfury™ is building solutions for the future, with the most significant technologies of the millennium. Founded in 2011, our mission is to make the world more trusted and secure by innovating at every level of technology — hardware and software — to put trust back into the equation. Bitfury’s portfolio focuses on solutions for artificial intelligence, blockchain technology and digital currencies. Bitfury is the leading security and infrastructure provider for the Bitcoin Blockchain. 
In addition to securing the Bitcoin Blockchain, Bitfury also designs and produces innovative hardware that keeps cryptocurrencies and blockchains secure, including custom semiconductor chips and mobile data centers. Bitfury is also a software provider for some of the world’s most cutting-edge applications through its Exonum™ private blockchain framework, its Crystal™ Blockchain advanced analytics platform and its Peach™ bitcoin payments portfolio. Bitfury Surround™, the company’s music entertainment division, is designing blockchain solutions to address challenges faced by artists and other stakeholders in the music industry. To learn more, visit www.bitfury.com. Bitfury Group Media Contact Rachel Pipan, [email protected]
https://medium.com/meetbitfury/bitfury-launches-digital-currency-fund-in-japan-f76d20d5218b
['The Bitfury Group']
2020-06-10 06:52:42.465000+00:00
['Bitcoin', 'Cryptocurrency', 'Blockchain', 'Technology', 'Finance']
830
NEM Europe was hosted by Insurlab-Germany for a workshop on NEM use cases for Insurance.
Insurlab Germany organized an innovation workshop on blockchain as part of an event series about Blockchain Technology for the insurance industry. Insurlab Germany is part of the DE: Hub Initiative and boasts most of the largest insurance corporations among its members, alongside many insurtech startups. These members will be connected, which will result in the digitization of the industry and, ultimately, innovative insurance products, projects and businesses. NEM Europe Head of Partnerships, Julian Richter, introducing NEM Blockchain at Insurlab Germany NEM Europe was invited to pitch the NEM Blockchain and introduce potential use cases to the insurance corporations. Following the pitches of NEM, Gnosis, uBirch and NOS, insurers, consultants and bankers signed up for workshops to discuss and work on different use cases. In order to identify good use cases, Julian Richter — Head of Partnerships, Europe — introduced a decision tree created by ETH Zürich, named "Do you need a Blockchain?", to the workshop participants. Using this as a tool, the group identified existing problems in the insurance industry (inefficiencies and a lack of trust between the insurer, the insured person and third parties), as well as potential improvements through Blockchain Technology. The group worked on existing use case ideas such as claim management improvement, brainstormed potential new use cases during the afternoon, like Atomic Swaps with multiple untrusted parties, and then pitched the resulting ideas to all participants. The Insurlab workshop was organized with the goal of coming up with real projects and PoCs. The use case discussions were very productive, and we are very much looking forward to future collaborations. NEM is committed to working on real-world use cases, to solve real-world problems for real businesses. In close conjunction with industry leaders in their own fields of expertise, NEM can achieve this. After all, who knows their business and the issues facing them better than those with the experience and know-how to back it up? Be it med-tech, supply chain, track and trace, financial, insurance, e-identity, or whatever — NEM's focus is on tangible outputs that let actual businesses gain an advantage through the use of our blockchain technology. Workshops such as this will expedite NEM blockchain adoption, and we would be happy to be involved with events such as these in the future. Do you have a suggestion for us? Please get in touch and let us know!
https://medium.com/nem-europe/nem-europe-was-hosted-by-insurlab-germany-for-a-workshop-on-nem-use-cases-for-insurance-bcc41c3e9119
['Nem Europe']
2018-09-21 09:30:06.987000+00:00
['Blockchain Development', 'Nem Blockchain', 'Blockchain', 'Nem', 'Blockchain Technology']
831
Blockchain Application Analysis - 2. Food and Beverage
iOS AR / Tennis / Drone / Editing Videos / Blockchain. I just like many things; although I am not a professional, learning many things makes me happy.
https://medium.com/turing-chain-institute-%E5%9C%96%E9%9D%88%E9%8F%88%E5%AD%B8%E9%99%A2/%E5%8D%80%E5%A1%8A%E9%8F%88%E5%AF%A6%E9%9A%9B%E6%87%89%E7%94%A8%E6%A1%88%E4%BE%8B%E5%88%86%E6%9E%90-2-4f054b941806
['李昱霆 Jerry Lee']
2019-10-03 14:04:08.155000+00:00
['Business', 'Blockchain', 'Food', 'Technology', 'Bitcoin']
832
A Specification for a Linguistic Computational Companion
Overview Humans have a proclivity and preference for the use of their mouths and other apparatus to attempt an efficient signaling between each other via linguistics and language signs. In other words, much simpler words, humans like to talk. Humans chat. They chit, they chat, they interlocute, eloquate and profligate, more or less. Humans do this so much emerging from their ranks are such fine professions as speech therapists, grammar teachers, linguists, and all other varietals symbolist experts. And being professionals they must produce! Publish! Persist, Insist, Supersist! In giving us Theories of Language, Theories of Word, Universal Grammars, Universal Dictionaries of Ontological Reactionary Symbolic Logistics. Being but peasants in the fields of these titans of speech we must do as they bid. We must band together with our alien linguists — our finger-tied tongue twisted — computer scientists, computer linguists, software programmers, architects of technical symbols, machine gurus of machine learning machines — and teach our computers to speak! To Respond in Kind! To Understand! To Chat! We should not attempt, for it is not The One True Way To Be Most Human, to converse with the machines in paintings, song, dance, tic tac toe, hide n seek, Marco Polo in the pool, nor to whisper, romance, pillow talk, poet proclaim, whisk away. No! Oh no! None of those things could ever reach the Towering Heights of the Towering Babel — Human Language! To Speak is To Know! To Write is To Enlighten! To String Together Sentences In A Metaphorical Displaying of Intellectual Colors is To Be The Uber Human! AI! And so we are resigned to design, program, shape, didactate, inculcate the modern silicon based computer with notions of human linguistics via methods of first understanding Human Language Ourselves, composing a Computer Language Recipe Otherwise Known As A Program, Supply Said Program with Data Sometimes Otherwise Known as Talking-or-Writing To The Computer In Order to Teach It The Language, and finally to Evaluate By Means of Interpretative Numbers About Language and Computers Engaged In the Linguistic Act. Summary of the Overview We will make a chat bot. Why Now? We finally have two signaling systems within our computer platform that provide enough dynamism to power a non-trivial User Experience. We have a more general system that can process signal in time of nearly any analog or digital signal — this system is known as Orage. And it is an evolving computer system that integrates thousands of signals, regulates the signals away from noise and/or repetition and stores dynamic but homestatical signal topologies. These topologies can be rendered by humans and machines as “images.” Additionally we have a Signal Compression Algorithm that attempts to retain the core Semantic Meaning of a giving signal (no matter how small or large) and provide a Lossy, but still Metaphorically Meaningful summary. We have a “General Text Summarizer.” Orage doesn’t seek to specifically maintain semantic clarity, it seeks to identify and maintain larger patterns within short and long timeframe signal flow. The Intelligent Insights (Signal Compression Algorithm) seeks specifically to reliably compress and maintain semantic clarity. These two systems work in concert with each other to allow a never ending flow of signal and to index that flow in useful ways such that a larger system/experience/app/platform using these systems can elastically expand and maintain coherence. 
For reference it might be useful to think about how the human knowledge codex often expands in scope, is indexed into things like libraries, then search engines, then anthologies and so on. Knowledge itself might be considered an index of indexes, where at any point within the index one can go into the expansion of those indexes into the more raw content (or signal). So Get To The Chat Bot Part First, the easy part. Using the Intelligent Insights Text Summarization API, a chat interface (such as a messaging app) or robot can maintain a full raw memory of previous chat, an evolving chat and more recent chunks of the chat AND summarize at different time scales. It is useful to know how this Intelligent Insights Text Summarizer works. It builds up a huge vector space of word embeddings. In simple terms, it maintains a network of relationships between words/phrases/letters etc. that tend to appear in similar ways. Grammar emerges from these embeddings. Variety of language forms emerges more sophisticatedly the wider the variety of language in the corpus of data the algorithm is trained on. Though there must be a balance between bias and variance. There are many hyperparameters to tune such that the language embeddings keep an uncertain amount of uncertainty. (The details of this algorithm can be found on the Maslo github.) A chat bot powered by this text summarizer over different content/signal/time scales effectively has an elastic linguistic memory at multiple levels. First, the Intelligent Insights linguistic network has many different scales of relational memory. Secondly, the chat bot maintains a shorter-term memory for the conversation currently in flight. A third, medium-term memory should be supplied by building a Summarizer model from PREVIOUS CHATS between the Bot and The User. This can be extended to Group Chats Between Multiple Humans and Multiple Bots. The summarized labels (the highlights) of a chat would be determined by literally asking each bot and each human, randomly at the beginning and end of a chat, what they thought the chat was about in a sentence or two. The "text" of the chat is the entire chat itself, with and without speaker designations. Now the harder part: how Orage is involved. In two ways. Orage should be observing the chat, getting a feed of the chat text (and a visual/audio of it if possible, if that makes sense), and integrating the signal flow of the chat. Orage attempts to build a regulation of general signaling response. That is, Orage serves as "an internal clock". This clock comes in the form of signal build-up that generates Memories To Preserve. The memories form when enough signal builds up in the flow AND has changed enough (up or down in magnitude) to signal a CHANGE in the overall flow. When Orage forms a memory it is a good indication that the dynamics are changing in a signaling space (a conversation or chat is a signaling space). The chat bot can have a constant subscription to Orage's memory flow. Those memories can be used by the chat bot to pull from a different timescale of the Intelligent Insights Summarizer, to in effect confirm or deny the change in the signaling space. The lack of memories flowing in is a sign that the conversation is repetitive or stagnant or has ended. Orage is an affective observer of Schedules of Reinforcement. Additionally, Orage memories (and its images) can be used to provide a visual or animated stimulant to the chat bot experience. 
An alternative use of the information in the Orage memories is to decode the images as animation parameters (such as to define the shape of an avatar or adjust its breathing movements, etc.). It is of use to review the Orage documentation found on https://www.masloplatform.com and on github. It may also be of use to review the psychological research on schedules of reinforcement, particularly Variable Ratio schedules and concepts around contingencies. Third, the tedious part. A UX should be considered carefully for a chat bot. Messaging interfaces are well known but may not be ideal for a robot chat partner. Messaging interfaces have evolved in the consumer product space, but not so much in the corporate and academic space. That is, Snapchat, Minecraft, many video games, and avatar-based environments provide ample dynamic ways in which humans and bots interact multimodally. Summary of All of The Above A Maslo Powered Chat Bot is of a whole different type than the academic and corporate chatbots most commonly found online and in research papers. Maslo takes the view that while conversations often do have formal content and some formal meanings, the vast majority of conversations are dynamic, referent at different time scales, and performance-based dances between people. Most chat bots fail to be compelling enough to ever get enough interaction with a user to truly form companionship. And Maslo believes that if a chat bot is to be trusted to learn and provide FORMAL CONSEQUENTIAL ANALYSIS, PREDICTION and PRESCRIPTION it must first be able to chit chat. Summary of the Summary We are going to combine our systems to date into a simple, delightful experience and have lots of bots and humans interacting. We will adapt together until we all share a common language: linguistically, visually, audibly and gesturally.
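To make the multi-timescale memory idea above concrete, here is a minimal, self-contained sketch in JavaScript. It is a toy stand-in, not Maslo's actual Intelligent Insights or Orage APIs: a crude frequency-based summarizer plays the role of the Summarizer, a simple change detector plays the role of Orage's memory formation, and all names are hypothetical.

// Toy sketch of the elastic, multi-timescale chat memory described above.
// The real Intelligent Insights and Orage systems are far more sophisticated;
// a naive frequency-based summarizer and a simple change detector stand in for them here.
function summarize(messages, maxSentences = 1) {
  // Score each message by how many frequently used words it contains; keep the top ones.
  const counts = {};
  const words = messages.join(" ").toLowerCase().match(/[a-z']+/g) || [];
  for (const w of words) counts[w] = (counts[w] || 0) + 1;
  const score = (m) =>
    (m.toLowerCase().match(/[a-z']+/g) || []).reduce((sum, w) => sum + counts[w], 0);
  return [...messages].sort((a, b) => score(b) - score(a)).slice(0, maxSentences);
}

class ChatCompanion {
  constructor() {
    this.shortTerm = [];      // the conversation currently in flight
    this.mediumTerm = [];     // summaries of previous chats
    this.signalLevel = 0;     // toy stand-in for Orage's integrated signal
    this.lastMemoryLevel = 0;
  }
  observe(message) {
    this.shortTerm.push(message);
    this.signalLevel += message.length; // integrate "signal"; Orage integrates far richer input
    // Form a "memory" only when the flow has changed enough since the last one.
    if (this.signalLevel - this.lastMemoryLevel > 100) {
      this.lastMemoryLevel = this.signalLevel;
      return { type: "memory", summary: summarize(this.shortTerm) };
    }
    return null;
  }
  endChat() {
    // Roll the in-flight chat into medium-term memory as a short summary.
    this.mediumTerm.push(summarize(this.shortTerm, 2));
    this.shortTerm = [];
  }
}

const bot = new ChatCompanion();
const chat = [
  "Hi there!",
  "I went hiking today and saw a storm roll in over the lake.",
  "The storm was loud but beautiful, honestly.",
];
for (const m of chat) {
  const memory = bot.observe(m);
  if (memory) console.log("memory formed:", memory.summary);
}
bot.endChat();
console.log("medium-term memory:", bot.mediumTerm);

The point of the sketch is the structure rather than the scoring: a short-term buffer for the chat in flight, summaries rolled into a medium-term store when a chat ends, and "memories" emitted only when the integrated signal has changed enough since the last one.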
https://medium.com/maslo/a-specification-for-a-linguistic-computational-companion-fd0f6b8a47da
['Russell Foltz-Smith']
2020-04-11 17:36:01.826000+00:00
['AI', 'Chatbots', 'Linguistics', 'Maslo', 'Technology']
833
Why crypto-exchangers attract investors
According to rumors, South Korean investors have bought the crypto-exchanger Bitstamp for $400 million. The exchange ranks 14th by daily trading volume ($122,369,567). Judging by Bloomberg's research, Bitstamp's daily profit is about $400,000. The new investors' goals for the asset are still unknown, but if it is left as it is, the return on investment will take about three years. Earlier, in late February 2018, the American payment company Circle bought the crypto-exchanger Poloniex. The purchase amount is unknown. According to Bloomberg, the daily profit of Poloniex is about $60 thousand. Circle has big plans for the purchase: the company wants to make the exchange part of the global cryptocurrency infrastructure. Many large investors did not manage to enter the crypto market early. Today, cryptocurrencies have become a more understandable tool, and their legalization is not far off. But investing in mining, or even in coins purely in the hope of price growth, makes little sense. The main profit potential is in the infrastructure, and crypto-exchangers are the main objects for purchase here. So new deals are just around the corner. Subscribe to be the first to read news of the blockchain and crypto industries & join our Telegram channel.
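For reference, the roughly three-year payback estimate follows directly from the figures above: $400,000,000 ÷ $400,000 of profit per day ≈ 1,000 days, or about 2.7 years, assuming daily profit stays flat.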
https://medium.com/ico-crypto-news/why-crypto-exchangers-attract-investors-4e4b6f6d2ca0
['Ico', 'Crypto News']
2018-03-26 19:43:05.410000+00:00
['Blockchain', 'Technology', 'Cryptocurrency', 'Ethereum', 'Bitcoin']
834
How To Keep Your Car Running Smoothly In A Pandemic
This year has been entirely unique thanks to COVID-19. Our modes of interaction have completely changed. For a lot of us, our daily drives to the office were cut short in favour of remote work systems, and we no longer had to pick our kids from school thanks to online classes. As a result, our cars haven’t enjoyed or received as much attention as they used to. An idle and unmaintained car can be a huge source of danger and we at Carhoot took it upon ourselves to share a few COVID-19 car care tips before we embark on our holiday road trips & adventures: Keep It Clean It’s really important to keep your vehicle clean, especially if you are going to leave it for long periods of time. Our cars collect tons of germs and dirt, and this can create serious health risks, not to mention, affect the longevity of your car. Research shows that our steering wheels are the dirtiest parts of our cars and with COVID19, maintaining clean surfaces is a life-saving necessity. Dirt on your car can: Limit your visibility. The headlights are subject to all the dirt, road grime, insects and light as the front end of your vehicle, which can quickly eat into the lens surface. This can severely reduce the spread of the headlight beam, making it harder to see and be seen. Lead to corrosion and rust. Remove enough paint, and the underlying metal will be directly exposed to water and moisture in the air, leading to rust. Even if all the paint is intact, there will still be small unpainted areas at the edges of panels that are still susceptible, especially when salt is present Breaking Through Clearcoat. Modern car coatings are made up of a primer, a colour coat, and a clear coat. Breakthrough the layer of clearcoat, and your car’s paint will age a lot faster. The clear coat is there to protect the underlying coloured base coat from UV light and physical damage so your car stays the same colour for years. Once that protective layer is removed, the colour will bleach out from light exposure, while it will wear down faster because it isn’t as abrasion-resistant as the topcoat. Primer is designed to adhere to the metal and provide a surface for the paint. If it’s exposed, it will draw in moisture, further accelerating paint damage A car cover is a worthy investment and will help protect your car from the elements, especially if you will leave it unused for extended periods. Make sure you get it thoroughly cleaned and pay attention to all those little hard to reach spots. PRO TIP: Spray any unpainted metal with undercoating to help protect against rust and plug in some steel wool in your exhaust pipe to prevent little insects & critters from messing up with your car. You can also use the Carhoot App to request for detailing before storing it. Keep Your Tires Healthy When it comes to tires, vigilance is key. Under-inflation is one of the leading causes of tire failure. If tire pressure is too low, too much of the tire’s surface area touches the road, which increases friction. Increased friction can cause the tires to overheat, which can lead to premature wear, tread separation and blowouts. Flat tires are a constant danger on our roads and one of the ways you can develop a flat is if you don't use/move your car as often. You can keep your tires in tip-top shape by: Filling them with nitrogen prevents air from running out quickly and running flat. Remember in Chemistry class, when we were taught Nitrogen is heavier than Oxygen…well, facts like these come in handy when keeping your tyres in great shape! 
Rotate your tires regularly…you can set reminders on your mobile devices or Google Calendar to make sure you don’t forget! Make sure your tires are inflated to the recommended air pressure levels as per your manufacturer and be careful not to exceed the recommended limit. Don’t engage the parking brake if you are going to keep your car in storage, as it can become “frozen” and difficult to disengage. If you’re worried about your car rolling, get some wheel chocks or blocks of wood to wedge against the tires. If you have access to the right tools & space (flat and preferably concrete floored storage space/garage) you can consider raising your car to take off weight from the tires and suspension. Lastly, always check ALL FIVE TYRES before going anywhere. Even your spare tire needs some TLC before any road adventure. PRO TIP: Make use of the Carhoot App to replace your old tires with high quality and verified tires, and get your tires balanced. Check Your Fuel, Fluids & Power Cars use a lot of fluids to function properly. A car generally has around 6 primary fluids aside from fuel. If you are not going to use your car as often or even at all, you still need to keep up with your car’s recommended fluid maintenance checks. Keep your car running smoothly by: Keeping your fuel tank full when storing your car. This will help prevent moisture from building up in the fuel tank. Fuel can last up to three months but adding an ethanol-based stabilizer can help keep your fuel lines and engine safe from corrosion. Check all your fluid levels (engine coolant, transmission fluids, brake fluids, antifreeze, etc )and change them as needed. If you live in colder areas like Limuru & Kericho, make sure you keep a close eye on your antifreeze levels. Change your engine oil often. It is also recommended that you run fuel through the system every once in a while to keep your engine healthy. Don’t forget to change your oil filter too! Your car battery will lose charge (discharge) if it isn’t driven in a couple of weeks. All you need to keep your battery running is to connect it to a trickle charger or battery tender with an automatic shut-off feature or float mode. This will make sure that your battery doesn’t get overcharged. The battery can remain in your car or be removed while it’s hooked up to the battery tender. PRO TIP: Keep a full fuel canister in your garage or on hand, and remember to top off your fuel tank every time you store your car. Alternatively, you can make use of the Carhoot fuel delivery feature if you run out and don’t want to leave home. We hope you learnt a few neat tricks and drive safe this Festive Season! #CHOOSEJOY!
https://medium.com/@carhoot/how-to-keep-your-car-running-smoothly-in-a-pandemic-c05996c0e0f0
[]
2020-12-16 11:23:09.454000+00:00
['Tips', 'Technology', 'Automation', 'Africa', 'Cars']
835
Rethinking self-interest
In the late 1990s, people around the world began to live in a state of rising fear of two missing numbers. The computer bug known as Y2K threatened to wreak havoc on the global infrastructure through the tiniest of details: computers being programmed to represent years in two digits (“99”) instead of four (“1999”). Headlines warned that systems would go haywire — crashing planes, freeing prisoners, and potentially leading to “The End of the World as We Know It?” as a 1999 Time Magazine cover posed. We laugh at Y2K today like it was just another Skidz-like ’90s fad, but that’s only because computer scientists successfully fixed the bug. (The immovable deadline helped: computer scientists had raised alarm over this exact issue since the 1950s but it took until basically the night before for anyone in charge to do something about it.) Though it has yet to make headlines, our world today faces even greater threats — also because of incomplete information. The kinds of things alarmist Y2K articles warned about are actually happening right now because of it. The problem in our case isn’t some faulty code. It’s a critical, out-dated assumption. In the interests of self-interest Our story begins — where else? — with the origins of capitalism and Adam Smith’s famous observation: “It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own self-interest. We address ourselves not to their humanity but to their self-love, and never talk to them of our own necessities, but of their advantages.” On this point, Adam Smith was absolutely right. Expecting and encouraging people to act out of their own self-interest will produce better results than imploring them to do something for some other cause, however noble. When it comes to capturing what’s actually in our self-interest, however, this observation inadvertently inspired significant harm. Over the course of the 20th century, society’s concept of self-interest became more and more bound to our short-term individualistic desires. Self-interest is instant gratification — what you as an individual want right now. The wave of consumerist individualism began with the Baby Boomers (leading to the so-called “Me Decade” of the ’80s) and Millenials are the predictable sequel. This shift is more than a media fiction. We can track the rise in individualism in everything from the increase of singular pronouns versus collective pronouns in song lyrics to the decline of bowling leagues to the personalized feeds we spend hours in today. Our lives are becoming more atomized and our future timelines are shortening. Our view of self-interest has become so specific it even has a logo: the hockey stick graph. A chart where whatever we want — money, power, followers — is growing so fast the line slopes up and to the right. We’ve convinced ourselves this is life’s best-case scenario. In reality it’s just a small slice of a much bigger picture. When we extend both axes on the graph, a very different image emerges. From this we can map out four distinct spaces of self-interest. There’s Now Me. What I as an individual want and need right now. This is how we see self-interest today. There’s also Future Me. What the older, wiser version of you wants you to do. The person you become is defined by your actions in the moment. There’s Now Us. Your friends and family and the communities you’re a part of. Your decisions directly impact them, just as theirs impact you. There’s also Future Us. 
The community you belong to even though you haven’t met the people in it yet. Your kids, other people’s kids, the older versions of ourselves that face an uncertain future. All of these spaces are in our self-interest. Not just Now Me. This theory is called Bentoism, an acronym for BEyond Near Term Orientation. Why Bentoism matters For decades we’ve operated like Now Me is all there is. We’ve maximized comfort, pleasure, and financial gain while actively avoiding sacrifice of any kind. We’ve kicked so many cans down the road there’s now a giant wall of them that we’re barreling into head-on. It’s not that there’s no solution. It’s that we keep trying to solve every decision according to the needs of just one piece of the puzzle. Our systems are built on models that see people as individualized consumers that reduce the range of human possibilities down to the optimization of financial value. People have built truly amazing machines to do these things. But while humanity’s Now Me is a giant glimmering skyscraper (with extraordinary amounts of homelessness), its Future Me, Now Us, and Future Us look more like the summer disaster movies we escaped into so we could tune out the bad news our disinterest further fueled. Despite all of this, I’m optimistic about humanity. I believe people do the best they can with what they have and what they know. The question is what don’t we know, and how can we gain it or become more aware of it? More clearly defining our self-interest — the playing field we agree on as in-bounds for our decisions — is exactly this kind of awareness adjustment, and one that can drive fundamental shifts on both the individual and societal levels. This would be a big change, but changes of this level happen all the time. They just take time to happen. Thirty years, give or take. In our case, that means working to redefine our map to self-interest by 2050. Why 2050? Because profound changes in social values happen in generational increments. (In my book I write about how everything from modern medicine to exercise to hip-hop went from nowhere to mainstream in thirty years.) The people leading the world in 2050 will be Millennials and Generations Y, Z, and COVID. Groups with very different ways of seeing the world than those now in charge. A falling empire will give the 2050 generations the unfortunate responsibility and opportunity to lead humanity’s most dramatic evolution in more than a century. The overwhelming majority of these people recognize our current path is a dead end. What they lack is vision for what to build instead. The Bento is a map to our new world. Creating new systems and refactoring existing ones to reflect this new map is critical work. Here’s how I described it in my book, which closes with a snapshot from a sci-fi future: “In 2050 a Bentoist view of value is a real thing. People better understand their values and live more self-coherent lives. Companies hold themselves accountable to a wider set of values that they take as seriously as their profitability. Slowly but surely over the course of thirty years, a belief in rational value beyond financial value becomes normal. “As the Bentoist approach to value emerges, talented people become drawn to its unique challenges. Using your skills to maximize financial value seems like a waste when a whole new frontier of value awaits.” A year into the journey, this vision is starting to become real. The Bento Society Over the past year, Bentoism has become more than a theory. 
It’s become a community of people and a laboratory for experimentation. Its name is the Bento Society. The Bento Society hosted more than 100 workshops for thousands of people from around the world this year, its first. One member, Julian, describes it as “a welcoming space for people to rejuvenate themselves and co-imagine the world together.” In these sessions people actively confront and adjust how their beliefs, values, and lives come together. They push at the boundaries of their self-interest. Here’s what members say about it: “Ever since I created my first bento, I knew this was the community and space for me because I feel like I’m contributing to something bigger than myself. I consistently leave our time together feeling refreshed and motivated for the week ahead. Bentoism has simply beautified my life, inside and out.” “It’s helped me feel more confident and less alone when looking at the current state of the world and less stuck about certain decisions.” “It literally changed my life. I feel like now I have a focus beyond the present. It makes me think beyond today and see life from another perspective.” “It’s given me lots of clarity and a great framework to make important decisions that I struggle with. The interactions I’ve had during bento events have been super meaningful!” “Bento has helped me realize that I’m not yet very clear about what kind of future image I have of myself and the society I want to live in. Bento is currently helping me to interpret this nebulous image of Future Us and Future Me and to adapt my current actions accordingly.” “I am conscious of what I am creating in a holistic sense I am connected with the reality around me and by default am contributing sustainably. This ‘feeling a part of the whole’ is comforting and cleansing at the same time.” “Being able to sit and really think about and be accountable to all aspects of my now and future selves is time I now treasure in my week. Maybe changing the world is in how we all live our lives and not the preserve of a select few.” “Bentoism helped me begin to unearth the broader sense of values that I have that exist outside of commercial consumerism and my existence being defined by my daily career.” Bento Society members come from all around the world and every walk of life. We are retail workers and artists. Students and professors. CEOs and customer service workers. Health care workers and filmmakers. Scientists and Uber drivers. The Bento Society’s Mission As important and life changing as this work is, the goal of Bentoism isn’t just to help people better see what’s valuable and in their self-interest. The Bento Society’s mission is to redefine what the world sees as valuable and in its self-interest. Our goal is for this perspective to become the new default. There are three parts to our work: 1. Teach people Bentoism and create a welcoming space where they can practice, explore, and create self-coherence. We do this now with our Weekly Bento on Sundays, smaller Group Bentos on Wednesdays, a Slack community of several hundred people, and in newsletters to a couple thousand people. This work will grow and evolve to make the Bento as useful and accessible as possible. 2. Introduce Bentoism to organizations, community groups, and other collective structures through existing members. The next phase is the adoption of the Bento as a decision-making and priority-setting tool in organizations. The new Bentoism website has a section devoted to this with real world examples. 
The goal is to equip Bento Society members to shift their own organization’s maps in Bentoish directions. 3. Lead, fund, and support projects that establish a wider map to value and self-interest. We’re heavily inspired by Thomas Kuhn’s idea of “normal science.” That in the wake of paradigm change, new ideas become useful once the process of “normal science” happens. Kuhn defines normal science as the iterative, “puzzle-solving” work of applying a theory to individual fields of study. As individual scientists run experiments across a variety of contexts we learn how the new paradigm practically works. What had been a political debate over knowledge becomes practical and factual, and the new paradigm becomes adopted. The Bento Society plans to push the normal science of defining new values and a larger map to self-interest. This will start with community-supported grants for projects that expand how we define value and self-interest, which we’ll announce later this year. If you’d like to tell us about something you’re working on in this spirit we’d love to hear about it. A map to the new world In the lead-up to the year 2000, society became acutely aware of how dependent its systems were on faulty code. The Y2K bug turned what had been invisible and irrelevant into a major part of life. COVID-19 has similarly made us aware of the flaws in our systems and thinking. Social distrust, active undermining of collective norms, and a weak public health infrastructure are all the disastrous consequences of decades of under-investment in Now and Future Us. The pandemic has made clear which societies are limited by short-term individualism and which are not. The societies who are successfully navigating the pandemic are ones that have invested in Now and Future Us. Denmark put their economy and way of life into a temporary freezer at the start of COVID so it could be preserved for unthawing later. New Zealand’s high social trust has resulted in a society essentially free of the virus. In Asia, COVID has been more of a speedbump than a dramatic reset. These are the truly developed societies. These are the societies whose maps of the world remain intact. Struggling nations like the US and UK are lost because their existing map to the world — dominated by the pursuit of financial gain and Now Me desires — has no relevance to where we find ourselves. Economic growth can’t cure disease. Public health can’t be protected in societies led by governments that believe society doesn’t exist. These old maps have even less relevance to the challenges we face in the years and decades to come. But we can solve this. Shifting how we see self-interest is a scalable solution. It fundamentally changes our relationships to one another without infringing on personal beliefs. It imposes no values beyond an increased awareness of ourselves and each other. Yet it dramatically changes the context and substance of our decisions. Like Adam Smith’s OG ideas, Bentoism relies on each person looking out for their own self-interest. But it also, in the simplest of ways, expands the perimeter of our self-interest to include each other and our future selves. The Bento is a map to our new world. Notes The softcover of my book, This Could Be Our Future: A Manifesto for a More Generous World, comes out on November 19 with a new afterword and cover: Preorder here.
https://ystrickler.medium.com/rethinking-self-interest-e125b220ca8f
['Yancey Strickler']
2020-10-23 17:10:47.754000+00:00
['Future', 'Technology', 'Values', 'Self']
836
Different Kinds of Quantum Computers
A quantum computer, image taken from the Guardian, published Sat 28 Sep 2019 What is Quantum Computing, and why has it been in the news so much lately? What can be done with Quantum Computing, and how is it done? Today, I will break down what Quantum Computing is and the different kinds of Quantum Computers that I have encountered. Quantum Computing is a practice in which Quantum Computers can process massive, complex data more efficiently than classical computers. They use the fundamentals of quantum mechanics to speed up the process of solving complex computations. Put more simply, Quantum Computers help us understand data that normal computers cannot obtain. With the help of Quantum Computers, complex data is accessed and processed more easily and quickly. The difference between conventional computers and quantum computers is very big. A conventional computer is based on the classical phenomenon of electrical circuits being in a single state at a given time, either on or off. They use binary codes to represent a problem in the form of 0s and 1s. Quantum Computers are based on quantum mechanics, a theory that describes the physical properties of nature at the scale of atoms and subatomic particles. Quantum Computers use qubits. They use 0, 1, and superposition states of 0 and 1 to represent information. So basically, Quantum Computers make the lives of scientists and engineers easier. Quantum computers, at the moment, are very big and very complex to build. Quantum Computer Let's dive into the different kinds of Quantum Computers that are seen the most. Quantum Annealing Quantum Annealing is best for solving optimization problems, where researchers are trying to find the best (most efficient) possible configuration among many possible combinations of variables. Quantum annealing is the least powerful and most narrowly applied form of quantum computing. Quantum Annealing is all about finding the best answer: computers go through the possibilities and see which one fits the problem best, with the least amount of energy wasted by the system. Quantum Simulations Quantum simulations explore specific problems in quantum physics that are beyond the capacity of classical systems. Simulating complex quantum phenomena could be one of the most important applications of quantum computing. Universal Quantum Computers Universal quantum computers are the most powerful and most generally applicable, but also the hardest to build. The basic idea behind the universal quantum computer is that you could direct the machine at any massively complex computation and get a quick solution. They can be programmed to run quantum algorithms that make use of qubits' special properties to speed up calculations. Quantum Computers are very intelligent machines that help make our day-to-day lives easier.
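To make the idea of superposition a little more concrete, here is a toy numerical illustration (a classical simulation in JavaScript, not how real quantum hardware works): a single qubit is written as a pair of amplitudes for 0 and 1, the Hadamard gate turns a definite 0 into an equal superposition, and measurement collapses it at random.

// Toy single-qubit simulation, for illustration only.
// A qubit state is a pair of amplitudes [a, b] for |0> and |1>, with a*a + b*b = 1.
const ket0 = [1, 0]; // the definite |0> state

// Hadamard gate: sends |0> to an equal superposition of |0> and |1>.
function hadamard([a, b]) {
  const s = Math.SQRT1_2; // 1 / sqrt(2)
  return [s * (a + b), s * (a - b)];
}

// Measurement collapses the state: outcome 0 with probability a*a, outcome 1 with probability b*b.
function measure([a, b]) {
  return Math.random() < a * a ? 0 : 1;
}

const superposed = hadamard(ket0); // roughly [0.7071, 0.7071]
const counts = { 0: 0, 1: 0 };
for (let i = 0; i < 1000; i++) counts[measure(superposed)]++;
console.log(superposed, counts); // roughly half zeros and half ones

Running this a thousand times gives roughly a 50/50 split of 0s and 1s, which is the behaviour the superposition state describes; a real quantum computer manipulates such amplitudes physically rather than simulating them.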
https://medium.com/@gizemilaydaozturk/different-kinds-of-quantum-computers-993dad6d0e2c
['Gizem Öztürk']
2021-02-09 00:11:42.906000+00:00
['Physics', 'Quantum Computing', 'Emerging Technology', 'Technology', 'Quantum']
837
Node.js’s ‘fs’ Module: Writing Files and Directories
fs.Write There are two versions of the write functions, one for writing text to disk and another for writing binary data to disk. The text version of write function lets us write text onto the disk asynchronously. It takes a few arguments. The first argument is the file descriptor — a number that identifies the file. The second argument is a string that’s written to the file. If the value passed in is not a string, it is converted to one. The third argument is the position in which the file writing starts. If the value passed in isn’t a number, then it starts in the current position. The fourth argument is a string that has the character encoding of the file to be written, which defaults to be utf8 . The last argument is a callback function with three parameters. The first is the err object which has the error object and it’s not null if there’s an error. The second parameter is the written parameter, an integer which specifies how many bytes are written to the file system. It’s not necessarily the same as the number of string characters written. The third parameter is the string parameter that has the string that was written. On Linux, positional writes don’t work in the append model. On Windows, if the file descriptor is 1, which stands for the standard output, then strings that have non-ASCII characters won’t be rendered properly by default. To use the write function, we can use the open function to get the file descriptor of the file you want to write to first, then we can write to the file by passing in the file descriptor to the write function. For example, if we want to write to the file with the path ./files/file.txt , we can write something like this: const fs = require("fs"); fs.open("./files/file.txt", "r+", (err, fd) => { if (err) throw err; fs.write(fd, "abc", 0, "utf8", (err, written, string) => { console.log(err, written, string); fs.close(fd, err => { if (err) throw err; }); }); }); When we run the code above, we should get output that looks something like this: null 3 abc In the code above, we first open the file with the open function. We pass in the r+ flag so that we can write to the file. Then we get the file descriptor fd in the callback function that we passed into the open function. With the fd file descriptor, we can pass it into the write function. In the second argument of the write function we specified that we want to write the string abc to the file. In the third argument, we specified that we want to write it at position 0, the fourth argument specifies that the character encoding of the string should be UTF-8. The callback in the last argument would get us the result of the write. From there, we know from the output that three bytes and the string ‘abc’ were written to the file. Other than the r+ flag, there are many other possible system flags, including: 'a' — Opens a file for appending, which means adding data to the existing file. The file is created if it does not exist. — Opens a file for appending, which means adding data to the existing file. The file is created if it does not exist. 'ax' — Like 'a' but an exception is thrown if the path exists. — Like but an exception is thrown if the path exists. 'a+' — Open file for reading and appending. The file is created if it doesn’t exist. — Open file for reading and appending. The file is created if it doesn’t exist. 'ax+' — Like 'a+' but an exception is thrown if the path exists. — Like but an exception is thrown if the path exists. 'as' — Opens a file for appending in synchronous mode. 
The file is created if it does not exist. — Opens a file for appending in synchronous mode. The file is created if it does not exist. 'as+' — Opens a file for reading and appending in synchronous mode. The file is created if it does not exist. — Opens a file for reading and appending in synchronous mode. The file is created if it does not exist. 'r' — Opens a file for reading. An exception is thrown if the file doesn’t exist. — Opens a file for reading. An exception is thrown if the file doesn’t exist. 'r+' — Opens a file for reading and writing. An exception is thrown if the file doesn’t exist. — Opens a file for reading and writing. An exception is thrown if the file doesn’t exist. 'rs+' — Opens a file for reading and writing in synchronous mode. — Opens a file for reading and writing in synchronous mode. 'w' — Opens a file for writing. The file is created (if it does not exist) or overwritten (if it exists). — Opens a file for writing. The file is created (if it does not exist) or overwritten (if it exists). 'wx' — Like 'w' but fails if the path exists. — Like but fails if the path exists. 'w+' — Opens a file for reading and writing. The file is created (if it does not exist) or overwritten (if it exists). — Opens a file for reading and writing. The file is created (if it does not exist) or overwritten (if it exists). 'wx+' — Like 'w+' but an exception is thrown if the path exists. The binary version of the write function lets us write text onto the disk asynchronously. It takes a few arguments. The first argument is the file descriptor which is a number that identifies the file. The second argument is the buffer object which can be of type Buffer, TypedArray or DataView. The third argument is the offset , which determines the part of the buffer to be written. The fourth argument is the length argument that specifies the number of bytes being written, the last argument is the position which is an integer which describes the position in which the write function will start writing. The final argument is a callback function — a function that takes the err parameter, which has the error object. If an error occurs, the second is the bytesWritten parameter which gets us the number of bytes written to disk, the third is the buffer object, which has the binary data which was written to disk. For example, we can use it as in the following code: const fs = require("fs"); fs.open("./files/binaryFile", "w", (err, fd) => { if (err) throw err; fs.write(fd, new Int8Array(8), 0, 8, 0, (err, bytesWritten, buffer) => { console.log(err, bytesWritten, buffer); fs.close(fd, err => { if (err) throw err; }); }); }); We get the following output if we run it:
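The output was cut off in the original post; since the buffer callback receives the error, the number of bytes written, and the buffer itself, a successful run should log something along the lines of null 8 Int8Array(8) [ 0, 0, 0, 0, 0, 0, 0, 0 ] (the exact formatting depends on your Node version). For simple cases where we just want to write a whole file in one call, the higher-level fs.writeFile (or its promise-based counterpart in newer versions of Node) handles the open, write, and close steps for us:

// A shorter way to write a file when we don't need fine-grained control over positions.
const fs = require("fs");

fs.writeFile("./files/file.txt", "abc", "utf8", err => {
  if (err) throw err;
  console.log("wrote file with fs.writeFile");
});

// The same thing with the promise-based API available in newer versions of Node.
const fsPromises = require("fs").promises;

(async () => {
  await fsPromises.writeFile("./files/file.txt", "abc");
  console.log("wrote file with fs.promises.writeFile");
})();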
https://medium.com/better-programming/node-js-fs-module-writing-files-and-directories-da70190376c
['John Au-Yeung']
2020-03-18 03:01:10.782000+00:00
['Technology', 'Nodejs', 'Programming', 'JavaScript', 'Software Development']
838
Top 5 Trends in Facial Recognition That Will Dominate in 2020
Facial recognition technologies — the biometric ability to identify and verify a human face from a digital image — are being rapidly deployed across a diverse range of business industries. Notwithstanding the obvious security benefits, facial recognition technologies’ ever improving accuracy, speed of response and ubiquitous rollout has provoked a degree of unease in some quarters — especially in terms of consumer privacy. What is facial recognition, and how does it work? Facial recognition technologies, as the name itself suggests, is a process whereby an individual can be identified by capturing, analysing and comparing patterns on the individual’s face. It’s a specific kind of image recognition service that consists of the following: Face detection — identifying and locating human faces in videos and images Face capture — transforming the analogue information of a face into data, according to the facial features (i.e. spacing of the eyes, contour of the lips, chin, bridge of the nose) Face match — verifying whether or not two faces belong to the same individual Top Facial Recognition Technologies The established global technology giants are vying for the #1 spot: Google, Apple, Amazon, Facebook and Microsoft are the recognised key players in this biometric innovation. In 2014, Facebook launched DeepFace — a program that can determine with an accuracy rate of 97.25% whether two faces belong to the same person In 2015, Google launched FaceNet — an accurate facial recognition system that is used by Google Photos to identify, sort, and tag people in pictures In 2018, Ars Technica highlighted that Amazon’s facial recognition software ‘Rekognition’ can identify up to one-hundred people in a single image and quickly cross-check with databases to deliver precise results To understand better where the technology is heading, we’ve looked at the top 5 trends in the future of facial recognition services for 2020. 1. Booming Markets According to a recent research report on the facial recognition market by Markets and Markets™, the global facial recognition market which in 2019 was worth $3.2 billion is expected to grow to $7 billion by 2024 at an annual compounded growth rate of 16.6%. The key drivers of this projected phenomenal growth are an ever-increasing user base, evolving government security strategies, and the increasing need for fraud detection and mobile device proliferation. Where once facial recognition software was a technology advancement used only by militaries and intelligence agencies, the wider spread of facial recognition — and the availability of free software with facial recognition capabilities — means that it’s use by the general public is bound to transform how we understand everyday transactions. Health Facial recognition has an essential role to play in the health sector. Did you know that facial recognition systems can help to accurately track the use of medication by parents, support pain management procedures as well as help detect genetic diseases? Especially in the field of public health, this ability to track patients and their routines becomes ever more relevant — especially if they’re carriers of contagious diseases. That’s why for epidemiologists, the wider spread of facial recognition systems may eventually help them contain the threat of an epidemic. Although still in its infancy, the technology is set to rapidly transform the health sector over the coming decade. 
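To give a rough sense of how the "face match" step works under the hood, here is a small sketch (not any particular vendor's API): most systems reduce each detected face to a numeric descriptor, an embedding produced by a trained model, and treat two faces as the same person when the distance between their descriptors falls below a threshold. The descriptors below are made-up toy values purely for illustration.

// Sketch of face matching by comparing descriptors (embeddings).
function euclideanDistance(a, b) {
  return Math.sqrt(a.reduce((sum, ai, i) => sum + (ai - b[i]) ** 2, 0));
}

function isSamePerson(descriptorA, descriptorB, threshold = 0.6) {
  // Thresholds vary by model; 0.6 is a commonly used value for 128-dimensional descriptors.
  return euclideanDistance(descriptorA, descriptorB) < threshold;
}

// Toy 4-dimensional descriptors; real systems typically use 128 or 512 dimensions.
const enrolled = [0.12, -0.40, 0.33, 0.08];
const probeSame = [0.10, -0.38, 0.35, 0.05];
const probeOther = [-0.52, 0.31, -0.11, 0.44];

console.log(isSamePerson(enrolled, probeSame));  // true (distance is about 0.05)
console.log(isSamePerson(enrolled, probeOther)); // false (distance is about 1.11)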
Marketing and Retail If shopping experiences can be tailored to suit individual customers’ needs more readily, consumers — and companies — can benefit. Amazon and Facebook are 2 of the largest companies using facial recognition technology in retail today. Through the use of facial recognition systems, these companies use collected data to pitch relevant products and services as well as analyse shopping behaviour in real time to make their offerings ever more customer-centric. This particular ability of facial recognition systems is bolstered by advances in technology. Most prominently, the ability of these systems to also be capable of identifying features like age group or gender, just from a picture of a face, makes targeted marketing ever easier to achieve. For instance, facial recognition systems in vending machines can share with providers what products are selling — and who’s buying them. What’s more astonishing is the concept of selfie payment. Since 2017, KFC — the American fast food company — and Alibaba — the Chinese multi-national conglomerate — have been testing a facial recognition system for self-payments. This means that rather than credit or debit cards, financial transactions may just be validated through facial recognition systems — imagine walking into any ATM, and being identified as an account holder with just your own face. In previous years, the advent of card-based transactions made those optimistic about the future of finance and banking believe that in their lifetimes, cash-based transactions would eventually become obsolete. Now, the possibility is not just of a cashless society, but one without cards too. Face ID might just be the requirement needed to validate a financial transaction. 2. Embracing Deep Learning Artificial intelligence and deep learning — a specific branch of machine learning whereby a system continuously learns from data to improve incrementally — together make up two of the most relevant emerging technologies for the future of facial recognition systems. These technologies have been pivotal to the growth of facial recognition in 2020, and are essential for facial tracking, facial detection and facial matching. As such, significant improvements in facial recognition systems are predicted in the near future. A 2018 report by NIST highlights that in the 5 year period leading up to the report, there was a considerable increase in the accuracy of facial recognition systems that was substantially higher than the improvements made in the period 2010–2013. The next 2–3 years, the report goes on to say, will be even more important for the development of facial recognition technologies through advancements made on the back of artificial neural network algorithms — networks that after a learning phase become capable of giving a correct output value (or Result) after processing various input values. Given that deep learning also entails these recognition systems will be programmed to improve with more and more exposure and experience, rapid improvements in the way these systems work is likely. 3. Mapping New Users China and India are experiencing rapid growth in the use of facial recognition technologies. Today, the U.S is home to the biggest facial recognition applications market. According to Reuters in 2018, security officials in Beijing, China tested smart glasses that used facial recognition technology to identify suspects in real-time. 
Building on these findings, the New York Times reports that China is working with a number of revolutionary A.I. companies, including SenseTime, Yitu, and CloudWalk, to set up and perfect a facial recognition camera and video surveillance system nationwide. In India, the Aadhaar project — the biggest biometric database in the world — is now being further upgraded as the feature of facial authentication is included. This could mean that close to 1.2 billion people, the entire population of India, will be facially recognised by systems, devices and software. Unfortunately, this dimension of facial recognition systems has been subject to severe criticism, especially in the ways governments around the world have used such systems to persecute citizens within their countries. At most risk are undocumented migrants, or even refugees — as few nations have opted to naturalize people who enter their countries on a “refugee” status, this would mean governments now have more power to keep checks on who is and who isn’t a national of their countries. This aspect of facial recognition may also empower governments to pursue ethnically discriminatory policies, although as of yet, only China seems to have used facial recognition to such unusual extremes. 4. Boosting Security While facial recognition-based logins and facial recognition for online services are becoming well established, the use of facial recognition for detecting, and in turn preventing, crime is regarded as the most important application of this emerging technology. From borders and high-risk locations such as government buildings, airports and nuclear power plants, to commonplace buildings everywhere, be it a local business, multi-national supermarket or library, facial recognition is a recognised deterrent designed to help boost security. Even in office buildings that require clearance for certain sections, facial recognition systems are being introduced to further protect corporate secrets. Through facial recognition, law enforcement agencies can recognise individuals with a past criminal record as well as identify those looking to engage in suspicious activities that could lead to an unwanted security breach. This in turn allows organisations to speedily undertake the necessary actions to effectively protect the safety and security of their people as well as their physical assets. In the United States, schools are mulling using facial recognition services to alert authorities whenever expelled students or known criminals enter such facilities. Agencies all over the world have also started using such software to track people who are reported “missing”. 5. Facial Recognition for Content Moderation With the combined effectiveness of deep machine learning and the expansion of facial recognition services for the market in general, this kind of software is primed to also function within social media and the internet as a “moderating” force. This would mean that with facial recognition software, webpages can quickly catch images that are violent or inappropriate on their pages. Besides just ensuring that the internet remains a safer place for everyone, it also acts as an inhibitor for those that wish to use social media accounts to incite violent or dangerous activity. 
As one social media giant, Tumblr, has already been forced to remove all adult content from their page because of the abundance of child pornography on the website, facial recognition ensures such circumstances are less likely to arise in the future. That’s also a testament to the power of face ID within these image recognition services, with the capability of determining the “graphic” nature of photos and developing an estimate of both the age and gender of the subjects photographed. The Future is Facial Recognition With the evolution of facial recognition technologies and their diffusion throughout a wider stratum of society, such systems are slated to become a common aspect of everyday life. Science-fiction films may have been the first to pioneer their usage as verification devices, but now that even smartphones have built-in facial recognition capabilities, every face is effectively also a source of personal data for all future machines. It’s fair to ponder over how these systems will compromise people’s privacy. If databases owned by corporations, governments, and professional organizations all carry information about people’s facial information, much more information about the average individual would be recorded. Since there’s little precedent of the amount of consumer privacy that would become part of the market, digital rights activists and their concerns are well-grounded. What’s also worth remembering is that individuals already upload vast amounts of personal data to social media accounts. It’s data, that like facial features, would have once been conceived as private information, but is now considered vital by businesses in their attempts to target specific kinds of audiences. Already in the way the world is structured, consumer privacy is increasingly being undermined as a concept. In the face of the huge security and law enforcement advantages facial recognition can privilege its users with, its threats to consumer privacy are difficult to use compellingly against their usage. It’s difficult to surmise whether this is ultimately for the worse, but the advantages that these kinds of services can result in are very real. Only the future will tell if this kind of technology will ultimately empower people and businesses.
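A technical footnote on the face match step described near the top of this piece: in practice, matching usually comes down to comparing fixed-length numeric face embeddings produced from each detected face. A minimal illustration of that comparison (the 128-value embedding size and the 0.6 threshold are common conventions, not figures from this article):

```javascript
// Compare two face embeddings (fixed-length numeric vectors) by Euclidean distance.
// Distances below the threshold are treated as "same person". Illustrative only.
function euclideanDistance(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

function isSamePerson(knownEmbedding, candidateEmbedding, threshold = 0.6) {
  return euclideanDistance(knownEmbedding, candidateEmbedding) < threshold;
}

// In practice the embeddings come from a trained face-recognition model; here two
// made-up 128-value vectors stand in for them.
const known = Array.from({ length: 128 }, () => Math.random());
const candidate = known.map(v => v + (Math.random() - 0.5) * 0.01);
console.log(isSamePerson(known, candidate)); // true, the vectors are nearly identical
```

The thresholding step is what turns "how similar are these two faces" into the yes/no verification described above.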
https://medium.com/rezaid/top-5-trends-in-facial-recognition-that-will-dominate-in-2020-9f867af8572b
['Junaid Dar']
2020-06-01 12:03:02.834000+00:00
['Facial Recognition', 'Digital Marketing', 'Digital Transformation', 'Technology', '2020']
839
Mesh Networks — Improving Wi-Fi Access And Connectivity
If you have property with more than 1 floor, a basement, an attic and a large area (> 3,000 square feet), getting Wi-Fi access can sometimes be a challenge. The signal coming from a wireless router can be limited to a short distance (common with 5 GHz signaling) and cannot penetrate through thick walls. As a result, it would require users to either be in closer proximity to the router or use wireless repeater (also called range extender) access points. A townhouse, mansion, warehouse office or large estate would usually come to mind. It is often difficult to get Wi-Fi access because of the distance the signals need to travel. There are also many obstacles that can block the signal and reduce its strength, so users are not able to get reliable Internet. In most cases, you can get a much stronger cellular network signal (e.g. 4G LTE) for Internet access. That can be quite expensive for users on a limited data plan with their network. Mesh networks provide an ideal solution for that problem. A mesh network uses multiple devices for Wi-Fi access rather than just a single router. The devices are interconnected with one another, and provide access to the network all throughout a location. This allows users access to areas where the signal from a single router could not reach. It still supports the current Wi-Fi standards like 802.11ac dual-band (2.4 GHz, 5 GHz) or 802.11ax (Wi-Fi 6). The communication among the nodes and the router can use a different radio channel. Figure 1. A mesh network can support multiple devices across a large space. It is ideal for connecting IoT devices in smart homes and offices. Wi-Fi Connectivity Issues Some have tried BPL (Broadband Over Power Line) with good results. The BPL device connects to the router and transmits the signals using existing power line circuits. However in larger setups, BPL may not travel the entire grid to deliver data signals. This is because the signal transmission can be affected at some point in the circuit. These are due to transformers in the signal’s path. Figure 2. It can be frustrating trying to get a good Wi-Fi signal when you are far away from the router. (Photo Credit by Yan Krukov) Wireless repeaters are often used to extend a wireless router. The problem is that repeaters can be intermittent at times. Since they were originally designed to repeat a signal to reach the user, it can be affected by external sources. What they do is just extend the signal from a wireless router. The signal’s strength is not preserved, and it can degrade due to obstacles that can hinder the signal (e.g walls, floors, ceilings, metal, concrete) and also from electromagnetic interference (e.g. radio signals, microwaves). Mesh Networks Mesh networks offer a better solution. By using beamforming, Wi-Fi signals can be concentrated using antenna arrays to deliver more signal to the device rather than radiating outwards which weakens the signal strength. This is not exclusive to mesh networks alone. Beamforming is also used in 5G networks, to transmit signals between small cell site antennas at a distance of not less than 500 feet apart. There are other Wi-Fi implementations that use beamforming. This improves the transmission of signals which means more reliable bandwidth and faster speeds. The main purpose of a mesh network is to provide consistent wireless coverage. In a mesh network system, there can still be a main device that functions as a wireless router. This is where the Internet connection is coming from. 
The signal from the main device is then sent out to satellite nodes, which provide a hop to other nodes to provide a strong Wi-Fi signal. Users then connect to the node that is closest to them with a boosted signal as if they were right next to the actual wireless router. Figure 3. In a mesh network topology, each node n is connected to every other node (n - 1). For 5 nodes, each node has 4 connections c. The entire network has a total of 20 directed connections (n² - n, or n * c). From the diagram you have a total of (n * c) / 2 lines of communication, or 10 lines. If one node were to go down, you still have multiple paths to access the network. The mesh network uses an ad hoc topology. In this setup, it does not require centralized access to the network. The original architecture was meant to be decentralized, so that if one node goes down the network can still function. Retail mesh network implementations are ad hoc, but not fully decentralized, since they still require a main device for the Internet connection. Within the mesh itself, each node can route traffic independently, but Internet access still flows through that main device. Figure 4. ZenWiFi AX (XT8) mesh network system (Source: ASUS) The connectivity among the satellite nodes relies on self-healing algorithms (e.g. Shortest Path Bridging) which provide awareness of when a connection is broken. The nodes can re-route signals and allow the user to discover other nodes to connect to. A typical setup (e.g. Asus ZenWiFi AX) would consist of just 2 satellite nodes. It then requires connecting one of the nodes (i.e. the router) to the modem for access to the Internet. The other node is then configured to communicate with the router, and users can access the Internet. In terms of security, a mesh network system can provide features to secure the network and communications. With Wi-Fi 6, enhanced security can further be provided. Vendors often target the market for smart homes and offices, so this is an important consideration. These include devices like intelligent thermostats, smart refrigerators and a vast line of IoT (Internet-of-Things) devices. These can be vulnerable when connected to a network, so mesh network systems must provide the best security to prevent cyberattacks. Synopsis Mesh network solutions boost and extend the signal, using a distributed and decentralized network of satellite nodes. The entire system acts as one network, not individual access points that require a separate passphrase. The user can connect to any node to get access to the Internet with a strong Wi-Fi signal. In terms of setup and configuration, mesh networks have become easier to install with smartphone app-based management. When it comes to speed and performance, it will be up to the protocol (e.g. 802.11ac, Wi-Fi 6) and number of bands (frequency channels) that the system supports. Since access to the Internet is an essential requirement in daily life, it becomes necessary to provide that service. While it is easy to connect to Wi-Fi access in your typical small office and apartment, the challenge becomes greater when you have a larger space (e.g. multi-floor units, large offices). These include business centers, shared workspaces and mansion estates. Wi-Fi signals can get lost and degrade with distance. A mesh network environment provides consistent signal coverage that handles larger areas for Wi-Fi access.
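As a footnote to the Figure 3 arithmetic above, a short sketch of the same counts (the node count is only illustrative):

```javascript
// Connection counts for a full mesh of n nodes (illustrative only).
function meshCounts(n) {
  const perNode = n - 1;          // connections per node (c)
  const directed = n * perNode;   // n * c ordered connections: 20 when n = 5
  const links = directed / 2;     // (n * c) / 2 distinct lines of communication: 10 when n = 5
  return { perNode, directed, links };
}

console.log(meshCounts(5)); // { perNode: 4, directed: 20, links: 10 }
```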
https://medium.com/0xmachina/mesh-networks-improving-wi-fi-access-and-connectivity-316bf9c70c3c
['Vincent Tabora']
2021-09-07 06:57:15.416000+00:00
['Wifi', 'Internet', 'Network', 'Technology', 'Wireless']
840
A Shocking 1960’s Advertisement…
Sometimes there really are no words. As I do research for other things, publications in a newspaper move me, and I had to create a publication just to keep them separate from the rest of my work. What the F… [Fudge] Not sure what they are getting at here, but it's a bit in the 'ew, gross' category. Shock value for sure. Our society has really come a long way. Like this story? Want to see others like it? Check out more in Historical News or Internet Archaeology (True Crime). You can catch technology/cyber security influenced articles in Infoseconds. Sources/References, Bibliography
https://medium.com/historicalnews/man-machine-systems-a-shocking-1960s-advertisement-5c37ba672247
[]
2020-11-23 14:03:28.092000+00:00
['Technology', 'Advertising', 'History', 'Robotics', 'Funny']
841
Product Management History: The Nineties, The Noughties, and Beyond
At the close of 2020, when we’re on the brink of entering a new decade, we thought it’d be a good time for a little history lesson. With more people entering the Product landscape than ever, now is a good time to help the newbies understand where Product Management came from, and how far the industry has come. Where It All Began Not every discipline can point to a single person as its father/mother. Luckily for history-enthusiastic PMs, they can! Neil McElroy from Procter & Gamble (the man who also helped found NASA by the way) is often pegged as the man behind modern Product Management after he wrote a now-famous 3-page company memo on the principles of brand management in the 1930s. The memo describes the role of the ‘Brand Man’ who would be responsible for managing the product, tracking sales, and general marketing and promotion of the product. The memo revolutionized the way Proctor & Gamble operated, helping it to become brand-centric, and therefore leading to the start of Product Management as we know it today. In part, according to Rafayel Mkrtchyan in ‘History and Evolution of Product Management’, because of McElroy’s influence over young entrepreneurs Bill Hewlett and David Packard. When Hewlett-Packard ran with his ideas, it introduced an organizational structure where each product group functioned as a separate organization. This put more focus on both the customers and the products themselves. Sound familiar? Product Management in the 90s The 90s in tech. What an era. The way business was done changed drastically in the 90s, and not just because of the rapidly-improving technology. Corporations started re-structuring so that teams were self-managing, and were given more autonomy and ownership. This is when companies started applying consumer PM principles to software PM. At the time, Microsoft had Program Managers who were essentially engineers. There was a gap between development and tech which needed to be filled. There was no-one in the middle to ‘ translate’. Product Management developed organically, as the intersection between engineering and brand management. Program Managers evolved into Product Managers (though not universally, as we still have Program Managers today). It was the 90s that produced several lightweight software development methods. Popular methods were heavyweight, and led to bottlenecks and micro-management. The new methodologies were designed to allow for better time management, and allow for more creativity. Most notable perhaps is Scrum, which was popularized in 1995. Tech Highlights of the 90s 🌍 1990: Tim Berners-Lee first tests the software that would become The World Wide Web 🎮 1993: The PlayStation is released, launching a new era of home console gaming that’s still going strong today 🛒 1994: eCommerce sites (Amazon, eBay) become more and more popular 👨‍💻 1995: Microsoft launches Windows 95, and Internet Explorer makes the internet more user-friendly 📼 1997: Netflix is launched in its original version, mailing videos to customers in their own homes 🎧 1998: The first portable MP3 player is released 📧 Email takes off, changing the way people and businesses communicate 🎉 Y2K is expected to bring the new online world crashing to a halt (spoiler alert: it didn’t!) Product Management in the 2000's It was in the 2000s that the product world started to open up to more people who didn’t necessarily come from technical backgrounds. 
As Product Management secured its place as the intersection between design, technology, and business, having Product Managers from diverse backgrounds was finally recognised as a strength and not as a weakness. After all, many paths lead to product! 2002 saw the start of the APM program, which Marissa Mayer introduced at Google. It was the first program, soon to be adopted by tech companies around the world, which had the sole purpose of training the next generation of Product Leaders, by introducing them to the company culture and exposing them to current product talents. In the early 2000s we also got The Agile Manifesto, which was set to replace waterfall as the defacto method for building software products. It’s hard to say where Product Management would be today without agile, as it gave us so many of the tools which we use to build great digital products. Without agile, there would be no MVPs, no product-led growth, and the overall landscape of the tech industry would look very different. You might also be interested in: 4 Ways to Better Learn About Agile as a Product Manager Tech highlights of the 2000s 👾 2004: The video game industry’s profits officially overtook those of the movie industry 🗣 People started using the word Google as a verb 👋 Peer-to-peer technology took off, leading to big debates on the ethics of filesharing 🤓 Smartboards were more widely adopted and installed in schools, and eReader sales became as important to the publishing industry as paperback/hardback sales 📢 The Agile Manifesto was launched Product Management in the 2010s By 2010, the Product Management community was going from strength to strength, and expanding all over the world. The global product community started getting the recognition it deserved. Going beyond a job, Product Management became a craft. Product School was founded in 2014, from a small coworking space in San Francisco. We started offering Product Management training to individuals, and over the last 6 years we’ve grown to a Product community of over one million people! This is also the decade where product leaders realized that they could give back to the Product Management community, volunteering their time to give talks, write books, appear on podcasts, attend conferences, and mentor the next generation of product leaders. This decade also gave us some landmark content for the community, such as Dan Olsen’s The Lean Product Playbook, and Marty Cagan’s Inspired. Of course, the end of the decade brought…2020. A year which will go down in history, and perhaps also infamy. In 2020 a lot of things happened very fast, whilst simultaneously bringing the world to a standstill. For Product Management, everything moved online in many parts of the world. Remote PM had been a topic of conversation swirling around the industry for some time, and Stay at Home orders really cranked that up a notch. Sectors like online education and online collaboration tools blew up, and eCommerce website (which were already very popular) went from strength to strength. 
Tech highlights of the 2010s 💻 2012: Google Chrome overtakes Internet Explorer as the most used web browser 🚀 2012: SpaceX’s Dragon became the first private commercial spacecraft to reach the International Space Station 🚲 In countries around the world, shared mobility grew exponentially, with car, bike, and scooter-sharing becoming popular in almost every major Western city 👩‍💼 Virtual assistants became more widely available, including Apple’s Siri, Google Assistant, and Amazon’s Alexa 🖨 The 3D printing industry gained $7 billion in sales 📺 Streaming sites rose in popularity, eclipsing traditional television and forcing Blockbuster to close 🌏 2018: The number of global internet users surpassed half the population of the world. In 2011 about 2 billion people used the internet, which more than doubled to over 4 billion in 2018 What Does the Future Hold? Technology has always been an exciting space to work in for those who are passionate about it, but perhaps it’s safe to say that there has never been a better time to build digital products. Companies will of course have to find the new balance between remote and in-office work. Some have already pledged a complete shift to remote-first, and it’s something that many new startups will consider as a cost-cutting effort. To find out what we (and a whole host of product leaders) think the future holds, check out our report on The Future of Product Management.
https://productcoalition.com/product-management-history-the-nineties-the-noughties-and-beyond-f00ca98975d4
['Carlos G De Villaumbrosia']
2020-12-29 14:58:21.215000+00:00
['Product Management', 'Tech Industry', 'Technology', 'Product Manager', 'Silicon Valley']
842
UI Regression Testing
Baseline image on the left and incorrect background color of the same page on the right

UI testing is hard. So hard that most companies have a suite of QA teams devoted to testing new versions of their site. Yes we have unit tests, yes we have integration tests; but while these provide great assurances on the functionality of our code, they fail to ensure that a user will see what they expect. When components move around or colors change, users can become frustrated when presented with an unfamiliar interface. UI regression testing compares a screenshot of a given page to a baseline of what it should be, and outputs a diff image when applicable. When a difference is found, you have many options on how to proceed. You can set up your CI/CD pipeline so that the build fails, trigger a PagerDuty alert, or send a Slack message to the team responsible. In this post, I'll be going over the tooling and setup of how we built our solution, and show how you can do it too!

Tooling

Here's what we used:
- Docker / docker-compose
- Selenium
- WebdriverIO
- AWS S3
- Blink-Diff

Docker provides a sandboxed environment and makes it easy to deploy this to our CI/CD pipeline. WebdriverIO allows us to control the headless browsers managed by Selenium, S3 stores our files (baseline images, diff outputs etc.), and Blink-Diff is our image comparison tool (note that it's an npm package).

The Process

1. Download baseline images from S3
2. Take screenshots of the local version of the site
3. Test for any differences between the screenshot and baseline
4. Upload any generated diff images
5. Cleanup and alerting

Downloading the baseline images from S3 is simple. Whatever language you're using, AWS likely has an API for it. Similarly, dockerizing your website should be fairly easy, though if you're running a legacy site, it could be more involved. I won't discuss how to do this since there are many articles and tutorials on this subject, and it's very much platform specific based on your stack.

Docker

We'll be using docker-compose with a couple of pre-built images all set up to run Chrome and Firefox in Selenium. Here's the start of the docker-compose file that sets up the three services (one for each browser, and one that controls them). We'll be adding to this file as we go on. We set up our Selenium hub and two browsers to run as their own services. The browsers connect to the hub on port 4444, which we then also expose in the hub service so that we can see what's going on (this would be done by visiting localhost:4444).

Adding your Site to docker-compose.yml

You'll need to add your website as another service. This requires adding something like the below to your docker-compose.yml. We define a new service called app, and tell it to look in the folder ./app for its Dockerfile. The Dockerfile for the service should copy over its files, install dependencies and start up its server.

```yaml
app:
  build: ./app        # build using the Dockerfile in the `app` folder
  ports:
    - "3000:3000"     # expose port 3000 (the port the app runs on)
  logging:
    driver: none      # hide logs
```

I expose port 3000 here so that when the service is running I can visit the site and make sure I'm able to hit it. Once this is verified you can remove this if you want.

Adding WebdriverIO

Similarly, you'll need to have another service for the actual testing process. This will involve running WebdriverIO, downloading and uploading files to and from S3, diff generation, as well as any other business logic specific to your needs.
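The compose snippet embedded at this point in the original post is not reproduced in this text. As a stand-in, the Selenium hub and browser services described above might look something like the sketch below; the image tags and environment variables follow the Selenium 3-era Docker images and are assumptions rather than the author's exact file. The `app` service shown above and the `testing` service described next are added to this same file.

```yaml
# docker-compose.yml (sketch): hub plus the two browser nodes.
version: "3"
services:
  selenium-hub:
    image: selenium/hub
    ports:
      - "4444:4444"            # exposed so you can inspect the grid at localhost:4444
  chrome:
    image: selenium/node-chrome
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub  # tell the node where to find the hub
      - HUB_PORT=4444
  firefox:
    image: selenium/node-firefox
    depends_on:
      - selenium-hub
    environment:
      - HUB_HOST=selenium-hub
      - HUB_PORT=4444
```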
To get started, you'll want to add something like the following to your docker-compose.yml.

```yaml
testing:
  build: ./testing
  depends_on:
    - selenium-hub
  logging:
    driver: none   # turns off logs in your terminal
```

In here we specify that this service depends on selenium-hub so that Selenium starts up before the testing service. You may need or want more control over only starting one service before another than this provides. Check out the Docker documentation on how to do that.

Setting up WebdriverIO

WebdriverIO is pretty cool, but can be difficult to understand when first getting started. We'll be using wdio, which is the test runner built into WebdriverIO, but it's difficult to see where one ends and the other begins. To set it up run the following:

```
npm i -g @wdio/cli
wdio config
```

You'll then be asked a bunch of questions to set up your env. Below is an image of the questions you can expect to be asked.

WDIO Config Setup

This will generate a wdio.conf.js file with all your settings. The file has a ton of comments explaining each part of the config. If you're having problems getting the test runner to work, this file is likely the culprit. I've pasted below a version that I know works and that I have used myself. I'd spend some time going through the file that you create through the CLI tool and comparing it to the one below.

Working wdio.conf.js

You'll likely have to play around with some of the settings. For example, the timeout values may cause a problem and need to be increased, or the path to your test files might be different. When going through the setup process, wdio will offer to install any additional packages that you need based on your answers. This includes the report package and the testing framework among others.

The last step I'll cover is using WebdriverIO to actually run your tests. Take a look at the following code sample to see how I set this up. I define a few helper functions, and then simply loop through all the screen sizes and routes that I want to test. Next, I define a list of screenshots in a separate JSON file which allows me to easily run a diff comparison for any number of screen sizes. This lets me test what the page looks like on an iPhone SE vs. an iPhone X vs. a 13-inch computer, etc. Similarly, I pull the list of routes to test from an external location. You can set this up any way you want based on your architecture and requirements.

A few notes:
- Based on the browser, I use a different method to resize the browser
- The URL of the site is https://app:3000/${route}. app is the name of the docker service running the website. I specify the port as well and use the route passed in as a parameter
- I wait 1 second before taking the screenshot. There are likely better ways than just waiting a little to know when the page is loaded. You can test for an element on the page or perhaps for an event to have been fired.
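The author's working wdio.conf.js and spec file were embedded in the original post and are not reproduced in this text. Putting the notes above together, a simplified stand-in for such a spec might look like this (route names, screen sizes, and file paths are illustrative assumptions, not the author's values):

```javascript
// test/specs/screenshots.spec.js (sketch)
const routes = ['home', 'login', 'settings'];
const screenSizes = [
  { name: 'iphone-se', width: 320, height: 568 },
  { name: 'iphone-x', width: 375, height: 812 },
  { name: 'laptop-13', width: 1280, height: 800 },
];

describe('UI regression screenshots', () => {
  for (const route of routes) {
    for (const size of screenSizes) {
      it(`captures ${route} at ${size.name}`, async () => {
        // The post notes Chrome and Firefox needed different resize methods;
        // setWindowSize is used here for simplicity.
        await browser.setWindowSize(size.width, size.height);
        // `app` is the docker-compose service name for the site under test.
        await browser.url(`https://app:3000/${route}`);
        // Crude wait for the page to settle; waiting for an element is more robust.
        await browser.pause(1000);
        await browser.saveScreenshot(`./screenshots/${route}-${size.name}.png`);
      });
    }
  }
});
```

In this kind of setup, the generated wdio.conf.js would typically point at the Selenium hub (hostname selenium-hub, port 4444, path /wd/hub), list this spec file under specs, and use Mocha as the framework.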
All the Other Bits

A few more scripts are needed for this to work. You'll need to run the image comparisons, upload and download files from S3, and perform any other tasks that you want to occur for any given situation. These aren't difficult to write, and are fairly trivial, so I'll leave their design up to you (a minimal sketch of the comparison step is included at the end of this post).

The Final Piece

The last question in all this is: what happens when a build fails this test? This is more of a process issue than a technical question. Is the diff found intentional? If so, then you need a way to update your baseline image to the new image. This requires another part, a diff reviewer that allows people — application developers, people on your QA team, product owners or anyone else who needs access — to accept or reject the changes found. Accepting the changes would require updating the baseline image to the new screenshot image, while a rejection would have to notify the dev team responsible for the changes.

The Output

Below is an example output that is produced by the diff tool. On the left is the baseline image, the middle is the baseline overlaid on top of the screenshot, and on the right is the screenshot. The baseline has the drawing on it, since it was easier for me to add a quick edit to the baseline rather than edit the contents of the page.

Example diff output

One cool way to expand on this would be by integrating an AI to determine when a change is valid. You would either need to train your own model on good and bad UIs, or find a pre-trained model, integrate it into the process, and use that rather than blink-diff. While this is certainly a cool advancement to make, it's by no means a requirement, and doing simple image comparisons will take you far.

Potential Issues

I'll end with a few open questions and potential issues that you may encounter. You'll need to have some business logic to determine if baseline images exist. If not, then it means it's the first run and the screenshots should be named, labeled, or stored in a way that makes them the baseline images for subsequent runs. Timeouts can cause trouble. It's possible you'll have to increase the timeouts in the wdio.conf.js file to prevent the runner from exiting prematurely.

While this is a complex problem, and this solution doesn't solve everything, I hope it's given you some insight into how you can build your own automated UI regression testing suite. Since all of this runs in Docker it's very easy to drop into your CI/CD pipeline, whether it be Jenkins, Travis, AWS or any other provider. Happy testing!
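Postscript: as mentioned in "All the Other Bits", the comparison script is left to the reader; a minimal blink-diff sketch might look like the following (file paths and the 1% threshold are illustrative assumptions, not values from the post):

```javascript
// compare.js (sketch)
const BlinkDiff = require('blink-diff');

const diff = new BlinkDiff({
  imageAPath: './baselines/home-laptop-13.png',     // baseline downloaded from S3
  imageBPath: './screenshots/home-laptop-13.png',   // screenshot from this test run
  imageOutputPath: './diffs/home-laptop-13.png',    // composite diff image
  thresholdType: BlinkDiff.THRESHOLD_PERCENT,
  threshold: 0.01,                                  // allow up to 1% of pixels to differ
});

diff.run((error, result) => {
  if (error) throw error;
  if (diff.hasPassed(result.code)) {
    console.log('Screenshot matches the baseline');
  } else {
    console.log(`Found ${result.differences} differing pixels; upload the diff and alert the team`);
    process.exitCode = 1; // fail the CI build
  }
});
```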
https://medium.com/disney-streaming/ui-regression-testing-71b2ef1bd9b4
['Samuel Bernheim']
2019-03-29 21:08:01.908000+00:00
['Webdriverio', 'Technology', 'Regression Testing', 'Selenium', 'Snapshot Testing']
843
So, We Finally Got Our Teenage Daughter a Phone
So, We Finally Got Our Teenage Daughter a Phone by Tommy Paley Photo by Rob Hampson on Unsplash “I’m the only one in my class without a phone.” It was near the end of grade 7 for Charlotte, our 13 year old. It was yet another crazy busy day of me frantically racing away from my job to get her from school and barely making it to dance class on time when she uttered this statement, both casual and attacking at the same time. I’m such a proud father at moments like this. While she wasn’t directly putting pressure, she was also issuing a challenge — are you really going to be that parent? The parent who consciously opts to ostracize their own flesh and blood— for those wondering, I don’t mean that statement literally, as the clean up alone would take days. “Well…” I started boldly before slipping into silence hoping she’d be distracted by the inanity on the afternoon radio show or a nearby squirrel. As I made the final turns as we sped towards dance, I was searching for what path to take, trying to desperately avoid getting into an accident while also attempting to employ such profound and impossible-to-argue logic that she would drop the whole thing and resume her previous fascination with relatively harmless, inexpensive and non-addictive hobbies like dot-to-dot puzzles, friendship bracelets and witchcraft. I wasn’t entirely sure, exactly, what I felt. Parenting, in my experience, is full of moments like this. Confronted with a situation or a new reality — a fork in the road, if you must, though if you must, you should eventually look into this constant desire to place forks in roads, just saying. Forced to take action or make a statement that could irrevocably alter your family’s existence. Like my insisting that the whole family should wear matching 70’s era Adidas jumpsuits in public. You may not always realize it, as parents, but each decision you make could be starting to blaze a trail, that once entered, there is no going back. Other similar past challenges involved limiting junk food, reading during dinner and going into the nearby forest to blaze trails with our brand new set of torches. This wasn’t my first rodeo — no, that was back in 1996. But that’s a story for a different day. “You aren’t getting a phone before grade 8,” I finally said, repeating an often-used excuse that typically ended conversations in the past before they even got legs. I find when conversations too often have legs it’s usually a sign to cut way back on eating the next-door neighbour’s flowers. This time was different — she was older, wiser and had not only prepared questions, but had well-researched arguments, buttered-up complements (my love of both eating butter and buttering random objects up is well known by all) and, if it came to them, both ad hominem attacks and requests to treat me as a hostile witness. “But why?” she pleaded, with me instantly regretting our decision to raise her by encouraging her to question things. See, she had just returned from the grade 7 camp where, in the evenings and during downtime, devices were soon in everyone’s hands, as posts and likes and streaks and music and games took over. Based on her acrylic-on-canvas impressionist-style painting commemorating the moment that some local galleries have shown minimal interest in, it was one big group enjoying their technology, except our daughter, the outcast, the pariah. 
And, no, if you were wondering, we are not raising her Amish, though, if she wanted to construct a fully-functioning well or barn in the backyard, I wouldn’t complain. It’s so hard to be the one parent doing something differently, no matter how strong your moral ground — “no, you can’t have a bag of sugar and red dye #3 for a school snack”, “I’m sorry, you are only 6, you can’t have a nipple ring”, “yes, you have to wear spandex to school just like I did when I was a young boy”. As soon as your kid gains awareness that everyone else is allowed to do things you’ve been preventing or denying or maliciously saying no to for years, it’s all downhill from there unless you live in a valley and are just too exhausted to hike up to the top of one of the nearby hills solely for metaphorical purposes. Thankfully, summer came, full of beaches and parks and playing tennis and the focus on having a phone disappeared from view sort of like when I close my eyes when it’s bedtime or when I hold a piece of corrugated cardboard in front of an annoying work colleague when they are trying to talk to me. From time to time, my wife and I still tossed around the idea of getting her a phone while also tossing around a Frisbee or, once, an actual hot potato, because, we’d made one too many for our family — it’s hard to count to four sometimes — and I was raised never to let a hot potato go to waste. The nagging issue was my increasing amount of grey hair with a smaller, secondary issue that, for the first time, she’d be taking public transit on her own in the fall. So, the week before school started, we decided to practice because I’d heard once, from a reliable source, that practice makes perfect. I’d also heard, from a slightly less reliable source, that one should practice what they preach, but our schedules were just too busy for me to dedicate the appropriate time for preaching. In just over a week, Charlotte would be having to go to school and activities and, occasionally, to an undisclosed location where the pirate treasure was buried via bus and train. I couldn’t imagine my baby girl navigating public transit on her own. I also couldn’t imagine lots of things thanks to significant time being scolded for imagining too much on company time in my 20s. For the first practice my younger daughter and I accompanied her as I led her through the steps and the directions, pointing out spots of cultural interest as well as some fairly nondescript crows. The second time, we, again, accompanied her, but sat apart from her, almost like we didn’t know her or, we knew her, but she had on a near-lethal amount of cheap perfume. As we sat back and relaxed and kept our distance and our feet elevated to reduce inflammation, she took the lead. Finally, on our third trip, she went completely on her own as my younger daughter and I hung out at a park and threw discs — in our family, we often fill the time by throwing discs, doing puzzles and mocking oligarchs. It was 2 pm, the time she was supposed to return. No sign of her anywhere. Something made even more apparent by the ample signage in the area. Time continued to tick by as stress was building. “Where is she?” I bellowed followed by “How is my bellowing, it’s my first time?” I wished there was a way to call her or, failing that, to send a series of hilarious, yet stern, stick figure drawings describing my unhappiness with her tardiness sent via post. But, she had no phone, because someone — for argument’s sake, let’s say it was solely my wife — thought she wasn’t ready. 
She could be lost or crying or running away to join a cult or even seeding a private organic garden without me, fully knowing my love of gardening. And then it hit me, she had to have a phone. The flying disc also hit me as I was deep in thought. Finally, I was also hit by a falling leaf which, in context, was actually quite poetic. There was no denying it this time, unlike how I continued to deny that humans descended from chimpanzees and that eating tons of raw garlic may have some connection with why my family often dons hazmat suits when I’m around. It was clear, if she was going to be out in the world — often after sunset and, on an unrelated note, often wearing a wide variety of cute shirts— we had to have some way for her to contact us. That evening, after a brief consultation with my wife that future family historians will describe as “fairly nondescript” and “reverential if we are being kind”, I took my daughter for a drive. Thankfully, she had just finished relentlessly teasing her sister, so she had time. Instantly, she started peppering me with questions — a welcome break from her peppering me with finely ground pepper or sprinkling me with coarse kosher salt — “why are we here”, “how long will this be?”, “if we see one of my friends, do you mind pretending to be a potted tropical plant?”. Then we arrived outside the phone store and her jaw almost literally dropped — investing in extra chin straps for preventative measures really paid off! “Am I actually getting a phone?” she asked with a cautious excitement that many would reserve for receiving a stay of execution or realizing that a test result of ‘negative’ is a good thing. With a slow look to the horizon followed by a dramatic nod of my head — thank you expensive acting classes! — she released a loud shriek that, while temporarily causing mild-to-moderate hearing loss in my left ear, made me feel an interesting combination of happiness, trepidation and hunger all at the same time. For those wondering, I hadn’t had eaten since noon. That first day with her phone will go down in the annals of history as a fairly normal day. She was busy setting up email and Instagram and Snapchat. Totally occupied downloading music and installing games and playfully taunting her sister “Where’s your phone? Oh, right, you don’t have one”. With one eye on the screen and her fingers frantically taping away, she promised her mostly-helpless parents to ask before spending our money to purchase apps and to not exceed her data despite how hard it will be and to not forget who her family members were. She seemingly went from a girl with a few friends to someone with hundreds of requests and likes and followers in minutes. Now, it must be explained that I’ve worked in high school for years. I am not your out-of-touch dopey dad badly in need of a wardrobe update — though, once again, thanks to the acting classes, I can pull that off after some rehearsal time. As a school counsellor, I’ve been around the block — precisely three times and it would have been four if my calf wasn’t so tight — plus my wife, is a teacher, too. We were as equipped as two parents could be — aside from the couple across the street who are an annoying shining beacon of hope for us all. As our daughter entered the handheld technology phase of her life — it’s enough to make a grown man weep — we were prepared for the traps, the warning signs, the need for healthy boundaries and the importance of keeping fresh fruit in the fridge for snacking. 
With a sunny optimism that others have referred to as “refreshing” and “puppy-like” and “cute if you were 6”— I believed our daughter could be one of the few teenage girls to learn to use her phone responsibly with next to no challenges. For reference, I also once believed that pigs could fly, but, in my defense, I was always confusing pigs with birds. I was also dangerously nearsighted and afraid of my own shadow. She got off to a good start. She was using her phone for directions, always texting us when leaving school or coming home and occasionally posting on Instagram — she has three accounts: her regular one, one for her gerbils and one with pictures of food she has cooked (I’ve claimed she could combine the last two — not a huge fan of rodents, no matter how cute — but she didn’t find that at all funny). We had chats about being safe, not making your account public and not following anyone you don’t actually know and she not only nodded her head (mostly at the appropriate times), but also gave us a rousing standing ovation when we were done. It was all going swimmingly. Even after school was in full swing and dance classes resumed, and her new social connections expanded exponentially, she was very good at putting her phone away during class or keeping notifications off and my wife and I were spending a lot of time patting each other on the backs — sort of like a mild massage really — for another parenting job done well. I suggested that it was possible that we were placed here, on Earth, solely to as an example for all parents. But then, almost overnight (technically it was during the day), things slowly shifted. “I have 900 followers on Instagram!” she boasted instantly becoming the most popular thing in our house since sliced bread. (Don’t worry, sliced bread, you had your day). She joined Snapchat and got a Tiktok account and all of a sudden, all bets were off. Our gentle — and grammatically sound — reminders to “focus on your work” or “don’t forget about your chores” or “it’s bedtime” turned into more frustrated “GET OFF YOUR PHONE” and “I DON’T CARE IF YOUR FRIENDS HAVE NO LIMITS” and “I HAVE TO STOP RAISING MY VOICE BECAUSE ITS GETTING A LITTLE HOARSE AND, YES, I FULLY RECOGNIZE THAT GETTING A LITTLE HORSE IS FUNNY, BUT NOW JUST ISN’T THE TIME FOR PUNS!” Every single time we turned around she was on her phone, sneaking a look at social media or laughing at a new filter or sharing a new dance move. Each time we left her alone, she was racing upstairs to start a video chat or a live stream or a group convo. Despite pleas to “not spend all afternoon looking at a screen”, she often did, unless we pried the phone from her cold — it is winter in Canada after all — quite alive hands. If the room got a little too quiet, there was no wondering what she was up to. We wanted to so badly reel her or, if not her, something — like salmon or maybe some snapper or a nice piece of cod — in, but, on the other hand, she’d never seemed happier or more socially connected. “Don’t you want me to have friends?” she’d ask, knowing that we did. I wanted her to have the active social life that I could live vicariously through. But we were also so aware of where this could go — I had firsthand experience (probably closer to between third and seventh hand, if I’m aiming for accuracy) with students whose phone addiction led to school failure and social anxiety. 
I knew girls who got into some really mature situations via their phones from constant requests for nudes to offers to buy drugs or go vape or even propositions for sex (“are you dtf?” one 14 year-old girl received at lunch, one day while sitting in my counselling office from a person she barely knew). I knew countless parents who were tired of the fighting with their kids for control and had just let the rules go. I also knew parents who confiscated and grounded and discontinued phone plans. I also knew one set of parents who identified as human equivalents of grilled cheese sandwiches, but that isn’t important right now. Maybe we were naive or maybe we just played up the naivety for effect, but we believe we could find the fine line between the two extremes. It is a very fine line we agreed, before drawing an even finer line after investing in some top-notch pencils. After a brief brainstorming session on what to draw after we got bored of drawing straight lines, we continued to talk about Charlotte and her phone. On one hand we wanted her to have fun and be happy, while on the other hand we wanted her to be responsible. If she was able to have three hands, the options would be so exciting. After a short break for refreshments and toast points, I spoke eloquently about wanting to live in a world where not only was appropriate use of technology possible, but also where men of 48 with glasses and a thinning hairline were considered attractive and offered book deals based solely on short, funny, mostly-unread blog posts. It’s so different, I reminded my wife, after reminding myself in the mirror first, for kids these days. When we were kids, back in the old days, we spent all night talking on the house phones until our ears were sore and red. Some of us just had sore and red ears to start with and it would nice if you wouldn’t stare. These days very few kids actually talk on the phone with each other and with busy after school lives, there isn’t always time to hang out in person. Socializing and a significant amount of interaction for teens often takes the form of commenting on posts, creating memes, and what is “hip” and “now” and “fresh” (I’m fully aware that none of those words are actually hip or now or fresh) while being online with friends. And we heard our daughter loud and clear thanks to her proficiency in vocal cord usage and the thinness of our walls that she just wanted to “be a regular kid” and that she was “doing well in school” and was also confused about the need for air quotes, but that was excusable as she is only 13 and was upset at the time. But, as much as I wanted to deny it, she was right. Things could be a whole lot worse. She was loving school, seemed really happy and, despite wanting more and more and more phone time each day and trying hard to push the time when she plugged in her phone at night (thus ending her conversations for the evening) later and later and later, she also understood our concerns. She spoke to us about grade 8 friends who were up, online, till 3 am and who are already falling behind in school. She spoke to us about the pull towards the phone and the almost overwhelming desire to be on it all the time, but that she wanted to have limits. She has actually asked for us to take her phone away, at times, so she could focus on homework. Not that we don’t have phone use-related arguments from time to time — nearly everyday, to be exact — but I sense that she wants to do the right thing and learn to moderate herself. 
There is no going back — she has a phone and is on the cusp of 14. It will be interesting to see what the future holds. She is no longer the only kid without a phone, but I believe — I really want to — that it will all work out.
https://medium.com/the-junction/so-we-finally-got-our-teenage-daughter-a-phone-cb9122966184
['Tommy Paley']
2020-04-14 18:50:06.928000+00:00
['Technology', 'Nonfiction', 'Humor', 'Creative Non Fiction', 'Parenting']
844
Fitbit vs. WHOOP: Finding Your Wearable of Choice
Photo by Andres Urena on Unsplash Global Fitness Trackers market size was estimated to be $30.41bn in 2019 and is expected to reach $91.98bn by 2027, which exhibits growth at a CAGR of 15.2%. All around the world, companies are seeking a stake in the fitness wearables industry — a sector flooded with all kinds of players — from established technology corporations to fitness-focused startups. However, they seem to be onto something… According to an August 2020 report published by Fortune Business Insights¹, the Global Fitness Trackers market size was estimated to be $30.41bn in 2019 and is expected to reach $91.98bn by 2027, which exhibits growth at a compounded annual growth rate (CAGR) of 15.2% over the forecast period (2020–2027). The expected growth of this market provides an attractive opportunity for both product development and financial investment in the near future. Not only is the number of available products growing rapidly, but the function of these products will continue to vary, emphasizing different niche health goals — sleep, movement, glucose monitoring, heart rate. As a fitness and health enthusiast, I am always interested in the evolution of these products. Recently, I switched from the Fitbit Inspire to WHOOP as my “fitness wearable of choice,” in an attempt to find a product that better aligns with my personal wellness journey. For readers wondering which of these products they best align with, here are some tips based on my experience: When should you get a Fitbit? You are less connected to your phone Fitbit watch faces contain a clock and a summary of your performance (such as steps taken, calories burned, and minutes exercised). For those who like to take breaks from picking up their phone, they can still track this meaningful data right from their wrist. WHOOP does not contain a watch face, which requires checking the app from time to time in order to monitor your daily metrics. Your job requires more movement Step counting can be a fun and competitive way to keep yourself active; however, it can be a bit frustrating for users with a sedentary job or lifestyle. A good Peloton ride can be just as beneficial as an outdoor walk, but it may leave you feeling like you “let down” your wearable friend. For users who can reap the benefits of step counting throughout the day, through a more active job or lifestyle (think nurses or teachers), Fitbit provides a great source of motivation to get to your daily step goal. WHOOP does not contain a step count feature, which for sedentary workers may provide a source of relief, knowing there are other ways to encourage physiological health benefits. Simplicity is important Whether it be your first or tenth fitness wearable, Fitbit seems to attract and benefit customers that value simplicity in their fitness routines. Skip the bells and whistles — this product is easy to use, with a simple design, and collects the necessary data to inform its users of relevant health outcomes. When should you get a WHOOP? You have trouble relaxing WHOOP’s data collection recognizes “daily strain” as a key performance indicator. While it can be motivating to continue to maximize your daily strain, WHOOP users are also encouraged to obtain a certain level of recovery, either from a long sleep at night or less strain the following day. This will keep the gym rats in check and remind them that recovery is just as important as performance, in order to sustain effective workouts. 
You are interested in sleep performance WHOOP collects a ton of data while you are sleeping to analyze your sleep performance and provide feedback on sleep quality. Some features include Time in Bed, Disturbances, Efficiency, Respiratory Rate, and Sleep Latency. This will help users identify which factors contribute to a more restful, deep sleep, and which factors may not (sugar and alcohol for me!). Fitbit does provide “sleep scores” for users, but I find that the tracking is relatively inconsistent, and the amount of data does not compare to the analysis mentioned above. You participate in a variety of fitness activities WHOOP allows users to track a monstrously wide variety of activities, including things like Australian football, basketball, climbing, coaching, commuting, gaming, manual labor, meditation, and wrestling. No daily activity will be left unaccounted for with the WHOOP, and each activity will be analyzed to track its contribution to your daily activity strain.
https://medium.com/in-fitness-and-in-health/fitbit-vs-whoop-finding-your-wearable-of-choice-e7a4f683d8b8
['Tara Hally']
2020-12-31 13:32:40.502000+00:00
['Consumer', 'Fitness', 'Technology', 'Wellness', 'Workout']
845
2020 in review: 8 facts about women in tech, politics, and diplomacy
7. The European Parliament awarded the 2020 Sakharov Prize for Freedom of Thought to Sviatlana Tsikhanouskaya and Veranika Tsapkala “on behalf of the democratic opposition in Belarus, represented by the Coordination Council, an initiative of brave women and political and civil society figures.” “The whole world is aware of what is happening in your country,” said EU Parliament President David Sassoli. “We see your courage. We see the courage of women. We see your suffering. We see the unspeakable abuses. We see the violence. Your aspiration and determination to live in a democratic country inspires us.” Accepting the prize, the main opposition candidate Tsikhanouskaya said: “Each and every Belarusian who takes part in the peaceful protest against violence and lawlessness is a hero. Each of them is an example of courage, compassion, and dignity.” The EU Parliament listed the many brave women who are part of the democratic opposition in Belarus and the country’s Coordination Council: main opposition candidate Sviatlana Tsikhanouskaya, Nobel Laureate Svetlana Alexievich, musician and political activist Maryia Kalesnikava, and political activists Volha Kavalkova and Veranika Tsapkala, as well as political and civil society figures like video blogger and political prisoner Siarhei Tsikhanouski, Ales Bialiatski, founder of the Belarusian human rights organization Viasna, Siarhei Dyleuski, Stsiapan Putsila, founder of the Telegram Messenger channel NEXTA, and Mikola Statkevich, political prisoner and presidential candidate in the 2010 election.
https://medium.com/digital-diplomacy/2020-in-review-8-facts-about-women-in-tech-politics-and-diplomacy-3672c2be1e48
['Andreas Sandre']
2020-12-18 20:37:12.652000+00:00
['Technology', 'Government', 'Tech', 'Women In Tech', 'Women']
846
The Case for Using Covid-19 Exposure Notification Apps
The Case for Using Covid-19 Exposure Notification Apps It isn’t too late for them to make a difference Photo by Pocky Lee on Unsplash Last week, I got a text from the New York State Department of Health inviting me to use the state’s contact tracing app. It was the first time I’d received an invitation, and my first thought was: After eight months of Covid-19, you’re asking me to use it now? Today, the case count in the U.S. reached 16.9 million, and over 307,000 Americans have died. Transmission is rampant in a majority of states. If an app notified me every time I had a close brush with someone who tested positive for Covid-19, would it even make a difference in helping stop the spread? I asked Michael Reid, MD, MPH, who’s heading up the contact tracing programs for both San Francisco and California and is an assistant professor at the University of California, San Francisco specializing in infectious disease. The short answer is that it’s not known for sure whether these apps help reduce transmission in the U.S.: Not enough people have adopted them, so there’s not enough data. The long answer, though, suggests they may still play an important role in reducing transmission of Covid-19, especially once the country is ready to fully emerge from shutdown. Reid began his explanation by clarifying that these apps are for “exposure notification,” not for contact tracing per se. “They function to complement existing contact tracing capabilities,” he says. “That’s a useful distinction to make so that one understands that they’re not replacing the need for human contact tracing.” Human contact tracers — which the U.S. is woefully lacking, notes Reid — identify people who test positive for Covid-19 and interview those people to find out who they’ve had close contact with. Those people are then notified about their exposure and given instructions for self-quarantining. This is a tried and true public health method for reducing transmission of infectious disease, and it’s proven to be a powerful tool for controlling Covid-19 in countries like South Korea, Vietnam, Japan, and Taiwan. But the contact tracing process has a gap that apps may help fill. When a person who tests positive for Covid-19 is interviewed by a contact tracer, they identify people who they already know, like household members and close friends and family. They can’t, however, identify people they interacted with who they don’t know, like the salesperson at the grocery, or a teller at the bank. While working as a contact tracer, says Reid, he learned that most people don’t know who they acquired Covid-19 from, and a substantial number of cases also elicit very few contacts. Exposure notification apps can alert people if they’ve been exposed to a stranger with Covid-19 and give them “the agency to take matters into their own hands,” he says, by self-quarantining and contacting their local public health department, which can use that information to track outbreaks. These Bluetooth-based apps allow phones to communicate with other phones that they come into contact with; if one user has logged a positive test result, the other will be notified. For this reason, explains Reid, apps could actually have even more utility in a situation like the one we’re in today, where there is more widespread transmission of the coronavirus — assuming enough people use them. 
“If we’re ever going to go back to work or be on school campuses, or on factory floors, then these kinds of tools could be a real asset to be able to determine who you’ve come into close contact with who you might not otherwise known.” The big caveat, he notes, is that the most important public health interventions are still social distancing and mask-wearing. Whether enough people will use exposure notification apps remains to be seen. A recent Reuters analysis estimated that about 6 million Americans had used the apps by mid-November and that nearly 50% of the U.S. population would have access to one of these apps by Christmas. In April, a modeling study from the University of Oxford showed that 60% of the population needs to adopt these apps to end transmission. There are numerous reasons why people may not have downloaded the apps (Nature explains them in-depth here). Privacy is one common concern; in a OneZero story published in April, my colleague Will Oremus questioned whether these “opt-in” public health apps would be treated as such by private entities like churches and schools, and another colleague, Sarah Emerson, raised concerns that marginalized groups would bear the consequences of widespread surveillance. Reid didn’t seem too worried about the privacy issue, though. “Ironically, I think that’s a really peculiar conversation to be having, given that Google and Apple and other technology companies are scraping your data all of the time for information that they’re going to use to target you for new products that you’re going to buy,” he says. “The mechanics of exposure notification technology is such that [the data] is not being centrally housed in some department of public health data warehouse.” After talking to Reid, two things became clear to me: First, the country needs more human contact tracers and can’t expect apps to make much of a dent in transmission on their own. Second, I should probably just download the app. At the very least, there appears to be minimal cost and risk to me, and doing so would add another layer of protection to myself and the people in my household, plus support the work of the human contact tracers as they scramble to identify where the virus is headed next. And, looking ahead, it may very well be a tool I’ll encounter again in the future. If exposure notification apps are shown to be effective, “I think they’re going to play an important role when the next pandemic comes around,” says Reid. “And chances are it’ll come around sooner than the last one.”
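To make the Bluetooth matching idea described above a little more concrete, here is a minimal, heavily simplified Python sketch of the general pattern: phones broadcast rotating random tokens, remember the tokens they hear nearby, and later check those against tokens published by users who report a positive test. This is a toy illustration only, not the actual Google/Apple Exposure Notification protocol; the class and method names are assumptions made for the example.

import secrets
import time

class Phone:
    """Toy model of one phone participating in exposure notification."""

    def __init__(self):
        self.broadcast_log = []   # tokens this phone has broadcast (kept locally)
        self.heard_log = []       # (token, timestamp) pairs heard over Bluetooth

    def current_token(self):
        # Real implementations derive rotating tokens cryptographically;
        # here we simply generate a fresh random token each time.
        token = secrets.token_hex(16)
        self.broadcast_log.append(token)
        return token

    def hear(self, token):
        # Called whenever a nearby phone's broadcast is received.
        self.heard_log.append((token, time.time()))

    def report_positive(self):
        # A user who tests positive uploads the tokens they broadcast.
        return list(self.broadcast_log)

    def check_exposure(self, published_positive_tokens):
        # Each phone checks locally whether any token it heard was later
        # published as belonging to a positive case.
        positives = set(published_positive_tokens)
        return any(token in positives for token, _ in self.heard_log)

# Example: phone_b hears phone_a; phone_a later tests positive.
phone_a, phone_b = Phone(), Phone()
phone_b.hear(phone_a.current_token())
published = phone_a.report_positive()      # shared via the health authority
print(phone_b.check_exposure(published))   # True, so phone_b would be notified

The important property, and the reason privacy concerns are narrower than they may appear, is that the matching happens on the phone itself; only the tokens of confirmed positive cases are ever published.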
https://coronavirus.medium.com/the-case-for-using-covid-19-exposure-notification-apps-813a9f799986
['Yasmin Tayag']
2020-12-17 18:54:03.644000+00:00
['Coronavirus', 'Covid 19', 'Technology', 'Public Health']
847
Misconceptions about Software Engineers
photo courtesy of google.com On one of my random tours on Twitter, I came across this topic on what misconceptions people have about software engineers or software developers (I don’t get the hullabaloo concerning the titles, but that is a story for another day…) I am a Software Engineer in Kenya and for the past two years, I have had to explain what I do to practically everybody. I do understand the confusion, especially from Generation X, i.e. my parents, who only knew medicine, teaching and accounting as professions. They have had to learn it the hard way with the pace at which technology is moving. For Millennials though, it is sheer ignorance if you don’t know what a software engineer does. Spare a couple of minutes and “google” that, thank you! ☺️ For most of my career, the biggest misconception people have had is that I am the IT guy (or lady in this case). You are the first person people run to when their phones aren’t working, their flash disks are unreadable or they require a quotation for a new laptop. For some, this would probably be a business opportunity but NO! We are not the IT handyman/woman who you call when you can’t switch on your computer because you forgot to plug in the monitor cable. We are the people who design, develop, and maintain the websites and applications you use. You know the Facebook app you use? There is a person who created that. That person is a software engineer. I don’t mind fixing the phones and laptops but I do mind when my competence is placed on a scale for a job that is not mine. Please spare us when we don’t know what the issue is with your machine. We try our best but it doesn’t mean we have all the solutions to your computer problems. The second misconception is that I am a hacker, lol! This gets funnier each time I hear it. What are Hollywood and sci-fi movies doing to you guys!? I respect hackers, especially ethical hackers whose aim is to raise awareness in cybersecurity, but not all of us are hackers. Some of us barely know how to impersonate a user, let alone penetrate a system or create spyware. Our job involves trying to implement the best security policies for your application and not to steal people’s passwords for illegal practices. The last misconception that I get to hear is that we are always coding. gif courtesy of giphy.com We indeed love our job and it’s our passion but we also have a life and other interests! If you get to know us, you will know we have a whole other life outside code. Our lives don’t revolve around code but it does occupy the majority of our time. These are the three common misconceptions that I get to hear regarding my career. I am sure that there are many more that I haven’t mentioned but at the end of the day, regardless of what people think you do, find joy in doing your job.
https://medium.com/dev-genius/misconceptions-about-software-engineers-dfa9c321c67b
['Raycee Mwatela']
2020-06-19 07:35:49.771000+00:00
['Software Engineering', 'Technology', 'Software Development', 'Developer', 'Code']
848
How I’ve Used Miro to Launch 5 Success Projects — Why I Love Miro
The ingenious design feature of the moving pointers is that it kills two birds with one stone. One use case automatically serves a second use case. The first use case for the live movements of the cursors is confirmation that your team members are in the Miro whiteboard space. The second use case is feedback. When I collaborate with others and I ask them, “Hey do you see this feature right here?” — The moment their cursor moves to the section in question, I’ve received the feedback that they do see the feature I am talking about before they respond. Implicit feedback, plus, this is the type of detail I pay attention to when using this product. 2. Marketplace Apps, API, & SDK I personally don’t use these features as most of my use cases are solo work. However, I feel like I can’t write about Miro without touching on the true capabilities of Miro’s marketplace. I’ll briefly talk about these features. Just as they had 100’s of templates to choose from, their moneymaker is in their marketplace app integrations that allow teams & companies to use other services they pay for within Miro and vice versa. They have over 50 apps that integrate within Miro. To just name popular ones: Slack, Google Suite, Microsoft Suite, Salesforce, Github, Notion (embed native option), Trello (embed native option), Hubspot, Evernote, and Zapier Miro allows you to access these apps within your Miro Board so that in Real-Time you do not have to leave one application to access the other. If you have a unique tech stack and don’t see something you like, you’ll most likely need to use their API or SDK that allows you to customize almost any application you want. It seems they have a GitHub page with open source examples. These are all enterprise features that probably cost a lot. The Free Version For me, I’ve never paid Miro for their product (sorry Miro). If you are a solo user like me (most of the time), you can get by without paying. Here is what you get with the free version of Miro: Unlimited Team Members Maximum 3 Boards No controlled access (you can’t choose who has access to which Board, they get access to all Boards) All of the core integrations except the workflow apps such as Jira, Kanban, and Asana configurations However, this is ABSOLUTELY enough to get the job done. Here are a few growth hacking strategies I’ve learned over the years in order to get the most of my free Miro access: Invite as many people as you want into your Board space. Each Board is an infinite space. If you need more than 3 Boards, start utilizing relative regions within one whiteboard space. Use frames to delineate the region. If you are a solo user, there is no reason not to use the infinite space to separate out your work. If you want to create custom templates that you’d want all your users to use (or scale), just make the template once and copy & paste the existing template into the new or existing Boards. On the Miro website, it says the Kanban plugin is for Team (paid) version, however, I have access to it with the free version…so use the Kanban if you want! If you pay for Jira, Trello, or other workflow management software, you won’t be able to use them integrated within the Miro board. However, you can create a template version. My advice would be to just use the template version in Miro. With limited commenting and collaboration features, instead, you can literally type your comments within the Miro space using color-coded rules that match a user's name. A little hacky, but it works! 
A few features that cannot be worked around are the privacy settings and high-resolution exports. I didn’t mention this earlier, but you can’t export high-resolution PDFs and image files from your Board. You can only do low-resolution in the free version. Additionally, if you want to invite someone to collaborate on one Board, but DO NOT want them to see another board you have, tough luck. You are open for business. No way around this. In my case, I trust the people I am inviting to my Boards, plus it’s not the biggest deal if they see what I am working on. Of course, everyone’s case is different. The real reason why I love using Miro is their prioritization of design thinking. In my mind, this is how I foresee the design process occurring once their VP of Product decides to brainstorm on a feature for one of their products. For example, I’ve laid out my assumptions of how things work in the design team at Miro (I have no clue, but I’m just assuming based on my personal use cases). The following are questions that I feel get asked in this order to design the products they do. Let’s use my personal favorite feature of Miro to walk through this process: Built-In Templates (visual below)
https://medium.com/skilluped/how-ive-used-miro-to-launch-5-success-projects-why-i-love-miro-6c80e3f35465
['Drew Teller']
2020-12-09 00:19:44.412000+00:00
['Technology', 'Design', 'Productivity', 'SaaS', 'UX']
849
Can’t Take My Eyes Off You Chord
[Intro] C [Verse 1] C You’re just too good to be true Cmaj7 Can’t take my eyes off you C7 You feel like Heaven to touch F I wanna hold you so much Fm7 At long last love has arrived C And I thank God I’m alive Dm7 You’re just too good to be true G7 C Can’t take my eyes off you [Verse 2] C Pardon the way that I stare Cmaj7 There’s nothing else to compare C7 The sight of you leaves me weak F There are no words left to speak Fm7 But if you feel like I feel C Please let me know that it’s real Dm7 You’re just too good to be true G7 C Can’t take my eyes off you [Chorus] Dm7 I need you, baby G And if it’s quite alright Em7 I need you, baby Am7 To warm the lonely nights Dm7 I love you, baby G C Trust in me when I say: Dm7 Oh, pretty baby G Don’t let me down, I pray Em7 Oh, pretty baby, Am7 now that I found you, stay Dm7 And let me love you, baby Bb7 A7 Let me love you [Verse 1] D You’re just too good to be true Dmaj7 Can’t take my eyes off you D7sus4 You’d be like heaven to touch G I wanna hold you so much Gm7 At long last love has arrived D And I thank God I’m alive Em7 You’re just too good to be true A7 D Can’t take my eyes off you [Chorus] Em7 A7 I need you baby, and if it’s quite alright F#m7 Bm7 I need you baby, to warm the lonely nights Em7 A7 D I love you baby, trust in me when I say Em7 A7 Oh pretty baby, don’t let me down I pray F#m7 Bm7 Oh pretty baby, now that I’ve found you stay Em C7 B7 And let me love you baby, let me love
https://medium.com/@sephiaa08/cant-take-my-eyes-off-you-chord-cb0f758c7146
['Sephia Ananda']
2020-12-01 14:59:28.396000+00:00
['News', 'Life', 'CEO', 'Startup', 'Technology']
850
Microsoft is Lonely at The Top
The Transition from Windows is Full and Final Photo by Tadas Sar on Unsplash On October 28th, 2020 Microsoft Corporation launched its first industry-specific cloud, Microsoft Cloud for Healthcare, which seeks to integrate the growing bundle of Microsoft’s software application and infrastructure layers into a single product. An extremely difficult task but, if executed well, it will help cement Microsoft’s position at the top of the growing Cloud industry. “This end-to-end, industry-specific cloud solution includes released and new healthcare capabilities that unlock the power of Microsoft 365, Azure, Dynamics 365, and Power Platform,” wrote Tom McGuinness, Corporate Vice President, Worldwide Health, Microsoft. It’s a heavy lift, bringing all these pieces together and creating a seamless experience across multiple applications, while also sprinkling them with enough properties to address the requirements specific to a particular industry. But you would rather be in Microsoft’s shoes than those of its competitors. Image Source: Microsoft Microsoft took nearly a decade to build its capabilities in the software application layer (office 365, Dynamics 365, LinkedIn) and the infrastructure layer (Azure), while slowly connecting all the different pieces as parts of a single, unified platform. This unification process of bringing disparate individual systems together and making them work seamlessly for a customer will be a never-ending process that will continue as long as Microsoft Cloud is alive. Now, with the first industry-specific cloud, Microsoft is attempting to take things up a notch and move from creating a platform that can be used by a single client into a platform that can be used by specific industries, such as Healthcare. It will not be far-fetched to think that Microsoft Cloud for Healthcare is just the first of many industry-specific cloud products that will be launched by Microsoft. Sign-In is Simple, but Sign-out is Not Industry-specific cloud products will be notoriously difficult for clients to move away from. To this day, Oracle’s (ORCL) penetration on the database side is what is keeping the company alive in the cloud race, as clients who got entrenched with Oracle databases find it extremely hard to migrate to other cloud providers. The transition has been slow and the stickiness of the products has allowed Oracle an enormous amount of time to stay alive in a cloud industry that has run far ahead of it. Microsoft’s industry-specific cloud offerings will achieve the same level, if not more, of stickiness with its clients. Once the client starts using all the different software and infrastructure products offered by the Microsoft Cloud Platform, it will be extremely difficult for them to migrate to another service provider. The migration will not be impossible but will be very difficult to execute, and a time-consuming process that most CTOs would rather not engage in unless there is an overwhelming advantage. The migration process will only be accepted if it offers enough benefits in the form of cost or if it offers a significant technological edge. Typically, decision-makers look for signs of both cost and technology advantages to be present to move from one service provider to another. But Microsoft will do everything it can to keep the cost of migration higher and benefits of migration lower. The Global Cloud Industry is still on a Growth Trajectory Microsoft is the leader of the cloud industry but it is still growing faster than the number two player, Amazon (AMZN). 
In the most recent quarter, Microsoft CEO Satya Nadella announced that Microsoft Commercial Cloud reached $15 billion in revenue, a growth of 31% year over year. Amazon Web Services reported $11.6 billion in revenue during the most recent quarter, a growth of 29% year over year. When the number one player grows faster than the number two player, it is a sign that its position at the top is getting stronger. Through the last eight quarters, both Microsoft and Amazon Web Services increased their cloud revenues as the pandemic helped the cloud industry to grow even further. AWS quarterly revenue increased by more than 29% in each of the last six quarters, while Microsoft achieved better growth than Amazon Web Services, excluding Q4–2020. (Data source: Microsoft, Amazon.) Both companies held on to their operating margins; Amazon Web Services reported an operating margin of 30.5% in the most recent quarter, while Microsoft’s Productivity and Business Process segment reported an operating margin of 46.32% and the Intelligent Cloud segment reported an operating margin of 41.75%. This clearly shows that both companies did not sacrifice their margins to achieve a high growth rate. The top two global cloud players have proved beyond a reasonable doubt that they were not only resilient to the global economic distress caused by the pandemic but that they were able to take advantage of the transition in technology usage due to the pandemic. The growth rate could further improve once the global economy starts recovering. According to Forrester, “the global public cloud infrastructure market will grow 35% to $120 billion in 2021”. Microsoft will continue to grow closer to the industry average. The size of its top line, $15 billion in quarterly commercial cloud revenue, makes it harder for Microsoft to grow faster than the industry average. But the Windows maker may soon be standing alone at the top.
https://medium.com/illumination/microsoft-is-lonely-at-the-top-ca202d653c8b
['Shankar Narayan']
2020-11-29 04:16:10.114000+00:00
['Technology', 'Illumination', 'Business']
851
Being Compassionate — The missing “value” of Tech teams
Wondering what “compassion” has to do with software engineers or with coding? Here’s how… What’s the problem? All along my career in the IT industry, I’ve seen a lot of my friends and team members suffer from stress, burnout, and anxiety over the years. The reason: many of their team members, the so-called programmer ‘jerks’, get agitated when people make mistakes, failing to understand that what is simple for them is difficult for others, expecting others to work at their pace, imposing their perspective with the belief that they are always right, and exhibiting cold behaviour when things don’t work out their way. Things have sometimes gone so badly that it has driven people to leave this industry once and for all. Are you a programmer jerk? The one-stop solution — Be human. Being human is choosing to be compassionate. Transform yourself into a Mentor instead of being a ‘tor’mentor! Here’s what you can do to transform yourself into a Mentor… 1. Choose patience over losing your temper: “Patience, grasshopper,” said Maia. “Good things come to those who wait.” “I always thought that was ‘Good things come to those who do the wave,’” said Simon. “No wonder I’ve been so confused all my life.” ― Cassandra Clare You may be the go-to JAVA programmer in your portfolio and you are helping a fresher to install the latest version of JAVA. You realise that your mentee is struggling and you could very well get impatient and say, “What are you trying to do? Can’t you just use the command brew cask install java ?” Hold on! Remember that your mentee is new to JAVA, and he/she may not even be aware of the command line options or Homebrew. The gravest problem is that when we become experts, we fail to remember what it was like to live without this knowledge when we were freshers. Learning doesn’t happen overnight. Everyone has their own pace and grasping power. Patience is the first tool that you should add to your mentoring toolkit. 2. Choose engagement over control: “Control leads to compliance; autonomy leads to engagement.” ― Daniel H. Pink You are doing a code review and the moment you see the block of code you say, “That’s completely wrong. This is the way to do it. Refer to this repo.” PERIOD. There ends the conversation. You can be happy that the code was fixed the way you wanted. But it was done out of sheer fear and nothing else. Instead, choose to explain why you thought that the logic was wrong and what the business reasons behind your solution were. Furthermore, you can open up the doors of the conversation by asking, “Do you think there are alternate options to implement this logic?” Now, you have established the connection, got them engaged, made them curious, triggered their creativity, and improved their understanding as well. Gotcha? 3. Choose humility over ego and arrogance: “On the highest throne in the world, we still sit only on our own bottom.” ― Michel de Montaigne Mentoring is not something you do to someone but with someone. It is a learning partnership, and superiority or authority has nothing to do with mentorship. The amount of expertise or the years of experience or the number of grey hairs don’t count. The best trait of a mentor is to be humble and ever ready to learn. The next time your junior engineer talks about a new library that would help you avoid boilerplate code, get curious instead of arrogantly commenting that you know better. Humility allows you to see the innate worth in others, which in no way interferes with your ability to see the innate worth in yourself.
You can never be compassionate with someone while having a feeling of superiority over them. Crush your E.G.O. Take a step backward so that you allow your mentee to step forward. 4. Choose kindness over rudeness: “Three things in human life are important: the first is to be kind; the second is to be kind; and the third is to be kind.” ― Henry James Your tone and the language you choose make an ocean of difference. The words that we use to communicate really do matter. An unkind word can break someone’s self-confidence and dignity. # Always avoid using ‘just’ or ‘even’ or ‘still’— Can’t you just use this command? or “Are you not aware of even this?” or “You still haven’t figured this out? # Ask “Could I help you solve this deployment issue?” or much better “Shall we work together to solve this?” instead of “Do you need someone’s help to deploy?”. The latter can put them in a spot and make them wonder “Am I not knowledgeable enough to do this?”. # “This is so damn easy, you didn’t know this?” — How hard it is for us to remember the mid-night lamps we burnt to become proficient in something that it feels easy for us now? # Never modulate your tone to make any comment sound like mockery # Avoid sarcasm in any form # Review the code and not the person 5. Choose to enjoy the journey over reaching the destination: “For me, becoming isn’t about arriving somewhere or achieving a certain aim. I see it instead as forward motion, a means of evolving, a way to reach continuously toward a better self. The journey doesn’t end.” ― Michelle Obama There are times when you did your best but still your mentee didn’t meet your expectations. They may not be interested or your values don’t match. As simple as that. No worries — Mentoring is giving your best to the fullest capacity possible and is not tied to the measure of outcome produced. Not all mentoring relationships thrive, you gotta be lucky enough!!! Be ready to replace your expectations with appreciations. Yes — this time you won the crown. The appreciation was for giving your best!!! “Mentoring concentrates on the needs of the one being mentored, not on the agenda of the mentor.” -David Stoddard The END of a ‘tor’mentor. The journey begins as a ‘mentor’… “The attempt to become a compassionate human being is a lifelong project.” -Karen Armstrong Don’t you feel tired and exhausted by trying to prove yourself a ‘jerk’ all these years? Don’t you feel unhappy, depressed, and desperate when your fellow team members never reach out to you for help despite you being the ‘know-all’ jerk programmer? Did you ever count the number of self-help books that you purchased / read / re-read? Get to a cozy corner. Sit down and reflect: Is developing your technical skills in the latest tech stack and being known as an “awesome” programmer the sole purpose of your life? Is proving your worth as a ‘rockstar’ the only goal to accomplish? There is much more to life. Choose your core values. Retrospect on where you stand today. Determine who you want to be. Forgive yourself and apologize whole-heartedly for having fallen short of your values by being egoistic and arrogant to others all these years of your life. Make small changes to your behaviour every day, every week, every month — you may slip and fall off, return to your old comfortable “jerky” ways — but commit yourself to compassion every single time and believe that today you can be a better person than yesterday and tomorrow better than today!!! It’s time to take a pause. 
It’s time to care about other people’s emotions. It’s time to rise by lifting others. It’s time to fill in the missing “value” of your tech team — by Being Compassionate!!!
https://medium.com/an-idea/being-compassionate-the-missing-value-of-tech-teams-5650ed1bd59c
[]
2021-01-04 05:33:23.711000+00:00
['Values', 'People', 'Compassion', 'Technology', 'Teamwork']
852
Solving Common Vue Problems — Props and Routes, and More
Photo by Diego Jimenez on Unsplash Vue.js makes developing front end apps easy. However, there are still chances that we’ll run into problems. In this article, we’ll look at some common issues and see how to solve them. Passing Props to Vue Components Instantiated by Vue Router We can pass props to the components that are instantiated by Vue Router by passing our props straight into router-view . For example, we can write: <router-view :prop="value"></router-view> Then in our components, we write: props: { prop : String }, Then our component can access the prop prop. The type of prop is a string, so we can pass a string to router-view and access it in the route components with this.prop . Difference Between the Created and Mounted Events The created hook is run before the component is mounted. The DOM hasn’t been mounted or added, so DOM manipulation can’t be done in the created hook. mounted is run after the DOM has been rendered. So we can access DOM elements in there with refs. Adding Debounce for Events We can delay the execution of event handlers by using the debounce NPM package by writing: <input @input="debounceInput"> methods: { debounceInput: debounce(function (e) { this.$store.dispatch('updateValue', e.target.value) }, 200) } We can use debounce by using the debounce function from the package. The function that we pass inside will run after a given amount of time. We passed in 200 as the second argument, so the function is delayed by 200 milliseconds. We can also use the Lodash debounce method. For example, we can write: <input @input="debounceInput"> methods: { debounceInput: _.debounce(function (e) { this.$store.dispatch('updateValue', e.target.value) }, 200) } Then we get the same result. Hide Vue.js Syntax While the App is Loading To hide the Vue.js template syntax while the app is loading, we can use the v-cloak directive. For instance, we can write: <div v-cloak>{{ text }}</div> The v-cloak attribute will be added to the div while the app is loading. Therefore, we can hide it while it’s loading with CSS: [v-cloak] { display: none; } Get Selected Option on Change We can get the selected option on change by passing in the $event object to an event handler. For example, we can write: <select name="fruit" @change="onChange($event)" v-model="key"> <option value="apple">apple</option> <option value="orange">orange</option> </select> <script> const vm = new Vue({ data: { key: "" }, methods: { onChange(event) { console.log(event.target.value) } } }) </script> We have the onChange method that takes an event object. Then we can pass in $event and get the selected value with event.target.value . Reset a Component’s Initial Data We can reset a Vue component to its initial data by saving a copy of this.$data when the component is mounted. For instance, we can write: data(){ return { foo: 'foo', bar: 'bar', initialData: undefined } }, mounted(){ this.initialData = { ...this.$data }; }, methods:{ resetWindow(){ Object.assign(this.$data, this.initialData); } } When the component is mounted, we save a copy of this.$data in this.initialData . Now when we want to reset the component data to the initial data at any point, we can copy this.initialData back into this.$data to reset the data to the original data. Reference Static Assets We can use the @ symbol to get the path of the src folder. Therefore, we can write: <img src="@/assets/images/pic.png"/> to access the assets folder, which is inside the src folder.
Set the title Tag’s Content To set the title tag’s content, we can use the vue-headful library. To use it, we write: import Vue from 'vue'; import vueHeadful from 'vue-headful'; Vue.component('vue-headful', vueHeadful); ... to register the vue-headful component so that we can use it in our components. Then we can write: <template> <div> <vue-headful title="Title" description="Description" /> </div> </template> We set the title tag’s content with the title prop. The description sets the meta tag’s description attribute's value. Photo by Dragos Gontariu on Unsplash Conclusion We can pass props to router-view and the data can be accessed in the components instantiated with Vue Router. We can set the title and meta tags with the Vue Headful library.
https://medium.com/javascript-dots/solving-common-vue-problems-props-and-routes-and-more-29e8b13df45a
['John Au-Yeung']
2020-06-29 09:30:00.964000+00:00
['Technology', 'Programming', 'Software Development', 'Web Development', 'JavaScript']
853
What To Do When Widgets for non-CMS Websites Become Extinct
A screenshot from Peace in Practice. Photo provided by Celine Lai In 2002 I designed, created and uploaded my website “Peace in Practice” to the World Wide Web. This was in the times of the ICQ internet relay chat, MySpace (before Facebook) and beautiful customizable online forums run by Yuku, Mixt and Ning. There was a sea-life widget on my homepage, and using Stat Counter I found that every day there was a view of that page. That was until the widget was no longer maintained! Widgets and footer bars for websites not running on proprietary platforms soon became “dinosaurs.” “We want my Fish widget back!” Here is the tale of a “widget withdrawal” experience. If you do a Google search upon “widgets for websites” you will find “Elfsight” which has widgets for social media, marketing, selling, and communication; but no Sea-life Widget. I reminisce over the good old days when independent website creators could find delightful entertaining and realistic or fun widgets for their web pages. These included not only clocks and calendars and standard items on modern WordPress blogs and other websites, but things like bubbles and unicorns and yes, sea-life. The closest today that I can find of a fish widget is one where tadpole-like orange fish chase your cursor around. Of course, there are amazing fish or aquarium screen-savers, with fish you can choose, and metrics you can use to tailor your fishy scene. But some of us “oldies” feel deprived of a range of quality website widgets. My website, PIP, was a labour of love, mainly my love. Though I no longer update this site, I am proud of it, even if it has the longest homepage on Earth, and even if the code behind it is garbled and terrifying. With want and will and creativity, I adorned the homepage with an animated marquee (which a friend kindly created for me) and inter-wove many other hopefully interesting or enticing elements! I used the good old fashioned tried and true explorer’s way. I went on a “search and use” mission. This meant scouring the WWW back then, looking at hundreds of other websites and at what elements they used. I then peeked at the source code to understand how they had programmed these elements. Microsoft FrontPage and coding by hand was used to create all pages of “Peace in Practice”. My tuition for this included a one-day course on learning about HTML (hypertext mark-up learning). This was a new skill that I was eager to develop. I leaped from baby steps, trying to shove my training CD into the floppy-disk drive, to turning out my PIP site. Originally my website had a diagonal yellow moving banner for “Ten Million Clicks for Peace” across the top corner, which annoyingly (to some) moved down the page as one scrolled down. I thought this was pretty cool, especially because it linked to a personal “Peace Meter”. Gadgets are good. I like groovy or useful gadgets for websites. I like sea-life widgets too. Another victim of modernisation was the Skysa footer bar. This was a footer widget with pop-up tabs for games, a search field, and a Comments page, and it was customisable. So I had a “Go to the top” function enabled in the Skysa bar. The games were great. I loved playing them until one day I found the bar no longer there. If you google “Skysa bar” you will find it last mentioned in 2012. You might notice the script part for the Skysa bar still pops up when downloading the PIP home page, if you wait long enough for it to fully download. I left it there for sentimental reasons. 
I am not in a competition to win the “Best Website of the Early 2000s” after all. Hey, programmers out there. Not all websites run on a “professional” content platform system, like WordPress or Drupal or what-have-you! Some of us still host our own websites as collections of HTML pages. Back in 2002 there were uplifting web-rings or circles of websites, which were groups of related sites which one could move to from one to the next easily. Nowadays these have been replaced by SEO and browser searches and links from social media pages. I loved those web-rings, which no longer exist. If you go to the “Resources” page of PIP, you will see great “holes”, like no current clock (widget gone) and maybe a message that you need Adobe Flash Player. Flash Player will be discontinued after December 2020 so I should really remove the phantom clocks from that page, too bad that I like those clocks. They are round clock-faces with analogue time. I’ll look for a website-ready widget for world clocks sometime, if I could be bothered. It’s some consolation that my three month calendar running on JavaScript is still on the Resources page. 😅 It’s some dis-consolation to me that this Page may look a mess on some computers because of the mishaps with coding behind the scenes. Even today I get people emailing me asking me to add resources to the page, which amazes me. I reply that I am no longer updating that page. I don’t add that if ever the page is re-worked, I will then start checking the links regularly and will update or add to the page. Yes I can dream! The “Reflections” page used to have background music playing when you went to it. But one day in a fit of misgivings about the preferences of my would-be audience, I removed it. But now I miss that music too, and I wish I could remember what the name of it was! Back full circle to the beginning of the tale. I believe that when the homepage had its wonderful sea-life widget, that a person had the PIP homepage to start-up when opening their internet browser. I can imagine that person enraptured with the beautiful and soothing and colourful, water-filled scenes. Perhaps the poor person was stuck immobile somewhere (or not), and would gaze at the fishes swimming around serenely, for hours. Then …….the daily views stopped when the widget went. I was chagrined, not because I had lost a viewer. Oh no, I thought, why the heck did they take down the Widgets?? What about my fishy fan? Both that person and I really missed the widget. I hoped that the viewer found a suitable replacement quickly. Not like me. It has taken me years to work out what to do. First I desperately scoured the internet country-side, looking in vain for a fishy widget. I found some for a Mac computer, but I don’t have a Mac. Then it dawned upon me that videos on YouTube can be embedded. Could that be my solution? The answer is: Yes. This is the alternative that I have found. Because widgets of old have become the victim of browsers not accepting Java apps, and of quick and easy builds on professional self-hosted or other hosted sites for profits or businesses, sea-life widgets for the hobbyist have gone AWOL (i.e. Absent Without Leave). Not to forget that website design for mobile phones don’t include large or complex widgets, so why have ocean widgets for a website that will “never” be viewed on a desktop computer? I found “The Best 4K Aquarium for Relaxation” and I love it a lot. I embedded it into the homepage of “Peace in Practice”. 
Hooray, beautiful ocean fish have come to the door of my internet home! Too bad if an advert pops up now and then. Just click on the X to close it. It’s the price to pay for a free embed. You can go Full Screen too, and that is excellent. You or I can watch fish for 2 hours on the homepage of PIP. Why would you, somebody may ask? My answer is: “Well, YOU might not want to do so if your first go-to for ocean fish scenes is YouTube or Social Media or a fantastic modern website with the bees-knees in current technology; but there are people, believe it or not, who stumble across Peace in Practice.” Maybe the person who had PIP running daily will re-visit, or another visitor will find and love my homepage because of the embedded fish scene. For years I was sad because the widgets for websites that I wanted, were no longer being created. But now there has been a renaissance, and that is due to my going with what I can get! This is a lesson for all, to keep up with changes and to use what’s available, but at the same time, keep supporting the use and the design and development of the products that you love. Never give up on them. Never surrender your dreams. I am still waiting for an innovative Programmer to return back to the basics of developing and maintaining widgets for personal and hobbyist websites on desktop PCs. I may have to pay for it, but the first step is to get my sea-life widget back (without the adverts or using YouTube). My “Peace in Practice” site may entirely be a fossil, but to me it is my sweet-heart. Such is the stuff that dreams are made of. 😍 The times have changed, but that doesn’t mean that what was replaced has no value. The widgets of old are a part of our history, and all websites have a role or use. In my case, “Peace in Practice” is my single-handed way of trying to encourage…..well….putting peace into practice. Now that I have fish on the homepage, I feel better now.
https://medium.com/the-innovation/what-to-do-when-widgets-for-non-cms-websites-become-extinct-f7984ee08b3b
['Celine Lai']
2020-09-05 07:48:14.771000+00:00
['Technology', 'Productivity', 'Website Design', 'Programming', 'Widget']
854
The truth about the new SpaceX ‘Mini-Bakery’
Yes, you heard that right! SpaceX has a new mini-bakery! But it’s not feeding all the hungry SpaceX engineers working on the Starship at their Boca Chica site. Instead, they make their heat shield tiles here. SpaceX van outside of the mini-bakery in Florida As Starship prepares for its first orbital flight, the thermal protection system will play a crucial part in making the mission a success. The SpaceX factory which makes the tiles has been widely rumored about on the internet for many years. But in 2019 we started seeing some SpaceX vans outside something that looked more like a warehouse in Cape Canaveral, Florida. But oh boy, it was not just a normal warehouse. In May 2020 there was a site inspection that gave us a much closer look at the new SpaceX facility. It was said to have 20 employees at the time of inspection; they run 24 hours a day and work 7 days a week, in 3 shifts. The facility is said to be about 40,000 sq. ft. in size. The number of employees at the time of writing this blog must have surely gone up as the SpaceX Starship prepares for its first orbital test flight. Space Shuttle heat shield tiles So now that we know where they are made, let’s talk about how they are made. The new SpaceX heat shield tiles are very similar to those of NASA’s Space Shuttle’s thermal protection system. They need to be able to withstand very high temperatures during re-entry. For this, they need to have low thermal conductivity and a high specific heat capacity and melting point. Elon Musk has mentioned that the tiles are made out of silicon and aluminum oxide. The tiles are 90% air and 10% silica, and they are a bit like hard foam. That’s because air has low thermal conductivity and high specific heat capacity. This is very similar to the tiles of the Space Shuttle. Half of the new SN20 is covered with these tiles. SpaceX SN20 covered in the new heat shield tiles If we take a closer look at these tiles, we can see that they are labeled with red and green stickers. Red and green labels on SN20 Closer look at labeled tiles The tiles with red labels were found to have been broken or damaged somehow during the inspection. The ones labeled in green were found to be misaligned during fitting. Now, this might be one of the very first and very important problems for SpaceX to solve. It’s not as simple as baking a few foam-like tiles and sticking them on the Starship. For the Starship to be fully reusable, it needs to avoid such inspections. To understand this better, SpaceX plans on reusing the Starship at least three times a day. NASA’s Space Shuttle had a similar technology and it took literally months for inspection and maintenance between launches. The main reason for this is that the Space Shuttle had a much more complex shape than the SpaceX Starship. NASA’s Space Shuttle Thermal protection system (heat shield tiles) on the Space Shuttle The Space Shuttle had many different shapes of tiles, and during launch ice would fall from the main tank and hit these tiles, thus damaging them. SpaceX has about 15,000 tiles compared to the 20,000 of the Space Shuttle. The Space Shuttle’s tiles were glued in place, but SpaceX uses a red robot to weld the mounting pins onto the body of the Starship and a person just comes along and gives it a nice push into place. The reason they chose the hexagon shape is that if they were to go with, for example, a square shape, then the heat would get between the tiles and the body of the Starship would be exposed to it.
The thermal protection on Starship is much simpler and more efficient than that of the Space Shuttle. On June 7th, 2021, a Boca Chica watcher and Twitter user @StarshipGazer took some really good photos of a few shipments from the so-called “mini-bakery”. One of them was a wooden crate labeled “incoming mini-bakery”. This could mean that they are moving the mini-bakery near the production site. Again, this cannot be confirmed yet, since it could be some more tiles from Florida as well. What do you think about the new SpaceX ‘mini-bakery’? Let me know in the comment section below!
https://medium.com/@adityakm24/the-truth-about-the-new-spacex-mini-bakery-19b7dd55bc3b
['Aditya Krishnan Mohan']
2021-09-05 18:02:10.750000+00:00
['Spacex', 'Space', 'Mars', 'Space Exploration', 'Technology']
855
IoT analysis | Four Computing Types Of IoT
From a practitioner's perspective, I often see the need for computing to be more available and distributed. When I started to integrate the Internet of Things with OT and IT systems, the first problem I faced was that the amount of data sent by the device to our server was too large. I work in a factory automation setting: we integrate 400 sensors, and these sensors send 3 sets of data every second. Data problem Most of the sensor data generated is completely useless after 5 seconds. We have 400 sensors, multiple gateways, multiple processes, and multiple systems, and we need to process this data almost simultaneously. Most proponents of data processing support the cloud model, in which you should always send your data to the cloud. This is also the first IoT computing foundation. 1. Internet of Things Cloud Computing Using IoT and cloud computing models, you basically push and process sensor data in the cloud. You have a receiving module that receives data and stores it in a data pool (a very large storage space), then applies parallel processing to it (maybe Spark, Azure HD Insight, Hive, etc.), and then uses this information to make a decision. Since I started to build IoT solutions, we now have many new products and services that allow you to do this very easily: 1) If you are a loyal supporter of AWS, you can use AWS Kinesis and big data Lambda services. 2) You can use Azure's ecosystem to make it very easy to build big data functions. 3) Alternatively, you can use Google Cloud products with Cloud IoT Core and other tools. Some of the cloud computing challenges I face in the Internet of Things are: 1) Enterprises are unwilling to store their data on the platforms of Google, Microsoft and Amazon. 2) Delay and network interruption issues. 3) Increasing storage costs, data security and durability. 4) Usually the big data framework is not enough to create a large receiving module that can meet the data requirements. 2. Fog Computing for the Internet of Things With fog computing, we become stronger. We now use local processing units or computers instead of sending data all the way to the cloud and waiting for the server to process and respond. Four to five years ago, when this was implemented, we did not have wireless solutions such as Sigfox and LoRaWAN, and BLE did not have mesh networking or long-range capabilities. Therefore, we had to use more costly network solutions to ensure that we could establish a secure and durable connection with the data processing unit. This central unit is the core of our solution, and there are few dedicated solution providers. My first implementation of fog computing was in oil and gas pipeline projects. The pipeline generated several terabytes of data, and we created a fog network with appropriate fog nodes to process the data. Here is what I have learned since then from implementing the fog network: 1) It is not very simple; you need to know and understand many things. Building software, or our usual work in the Internet of Things, is more direct and open. In addition, when the Internet itself is an obstacle, it will slow you down. 2) Such an implementation requires a very large team and multiple suppliers. Open Fog and its impact on fog computing The OpenFog (https://www.openfogconsortium.org/) computing framework is used for fog computing architecture. It provides: examples, test benches, technical specifications, and a reference architecture. 3. Edge Computing for the Internet of Things The Internet of Things captures micro-interactions and responds as quickly as possible.
Edge computing brings us the closest to the data source and allows us to apply machine learning in the sensor area. The difference between edge and fog computing is that edge computing is entirely the intelligence of sensor nodes, while fog computing is still a local area network that can provide computing power for data-heavy operations. Industry giants such as Microsoft and Amazon have released Azure IoT Edge and AWS Greengrass to promote machine intelligence on IoT gateways and sensor nodes with outstanding computing capabilities. These are excellent solutions that make your work very easy, but they have greatly changed the meaning of edge computing as practitioners understand and use it. 4. Mist Computing for the Internet of Things We see that we can do the following to promote the data processing and intelligence of the Internet of Things: use a cloud-based computing model, a fog-based computing model, or an edge computing model. We can simply introduce the network functions of IoT devices and distribute workloads, using dynamic intelligent models that neither fog nor edge computing can provide. This type of computing can complement fog and edge computing and make them better. Establishing this new model can realize high-speed data processing and intelligent extraction from devices with a memory size of 256kb and a data transfer rate of about 100kb per second. I dare not say that this technical model is mature enough to help us deal with IoT computing models. But with mesh networks, we will definitely see enablers of such a computing model. Personally, I have spent some time implementing a mist-based PoC in the laboratory, and the challenge we have to solve is the distributed computing model and its governance. However, I am 100% sure that soon someone will come up with a better mist-based model that all of us can easily adopt and use.
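As an illustration of the edge and mist ideas above, here is a minimal, library-free Python sketch of an edge-side aggregator in the spirit of the factory example earlier in this article, where raw readings older than 5 seconds are useless: the node keeps only a short window of samples and forwards a compact summary upstream instead of every reading. The class name, window size, and summary fields are assumptions for the example, not part of any specific product.

import time
from collections import deque

class EdgeAggregator:
    """Toy edge-side filter: keep a short window of raw readings and
    forward a compact summary upstream instead of every sample."""

    def __init__(self, window_seconds=5.0):
        self.window_seconds = window_seconds   # assumed freshness window
        self.readings = deque()                # (timestamp, value) pairs

    def add_reading(self, value, timestamp=None):
        timestamp = time.time() if timestamp is None else timestamp
        self.readings.append((timestamp, value))
        self._drop_stale(timestamp)

    def _drop_stale(self, now):
        # Raw samples older than the window are discarded at the edge,
        # mirroring the observation that most sensor data goes stale fast.
        while self.readings and now - self.readings[0][0] > self.window_seconds:
            self.readings.popleft()

    def summary(self):
        # Only this small summary would be sent on to a fog node or the cloud.
        values = [v for _, v in self.readings]
        if not values:
            return None
        return {
            "count": len(values),
            "min": min(values),
            "max": max(values),
            "mean": sum(values) / len(values),
        }

# Example: one sensor pushing a reading every second.
agg = EdgeAggregator(window_seconds=5.0)
for i, v in enumerate([20.1, 20.3, 20.2, 25.7, 20.4]):
    agg.add_reading(v, timestamp=i)  # simulated 1-second intervals
print(agg.summary())

The same window-and-summarize pattern scales from a single sensor node (edge/mist) up to a gateway aggregating hundreds of sensors (fog), with only the summaries ever reaching the cloud.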
https://medium.com/@ashikquerrahman/iot-analysis-four-computing-types-of-iot-c3756e116288
['Md Ashikquer Rahman']
2020-12-12 09:40:33.277000+00:00
['IoT', 'Internet of Things', 'Internet', 'Technology', 'Iot Analysis']
856
ISWYDS exploring object detection using Darknet and YOLOv4 @Design Museum Gent
After repeating these steps for all our images we landed on 3000+ images featuring 37 classes (some images containing over 15 classes). Picking your guns. When it comes to object detection there are many options to pick from, but for our case, we will be using Darknet, an open-source neural network framework written in C and CUDA, to train our algorithm of choice: YOLOv4. You Only Look Once, or YOLO, is a state-of-the-art, real-time object detection system that makes R-CNN look stale: it is extremely fast, more than 1000x faster than R-CNN and 100x faster than Fast R-CNN. Another good thing about YOLO is that it’s public domain and, based on the license, we can do whatever we want with it...🧐 YOLO LICENSE Version 2, July 29 2016 THIS SOFTWARE LICENSE IS PROVIDED "ALL CAPS" SO THAT YOU KNOW IT IS SUPER SERIOUS AND YOU DON'T MESS AROUND WITH COPYRIGHT LAW BECAUSE YOU WILL GET IN TROUBLE HERE ARE SOME OTHER BUZZWORDS COMMONLY IN THESE THINGS WARRANTIES LIABILITY CONTRACT TORT LIABLE CLAIMS RESTRICTION MERCHANTABILITY. NOW HERE'S THE REAL LICENSE: 0. Darknet is public domain. 1. Do whatever you want with it. 2. Stop emailing me about it! That being said, after configuring our system and installing all the needed dependencies we can take her for a test ride — using some of the pre-trained models that come out of the box, and see what she has to offer. As we can tell from the results [pictures below], YOLOv4 comes with some pre-trained classes such as a chair, a vase, and of course the very uncanny human. Let’s not make use of that last one, shall we? Although it missed out on some of the more obscure-looking vases, we will be using these pre-trained weights as building blocks to create our very own. The goal here is to output new classes based on the object number of the objects depicted.
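For anyone who wants to reproduce that "test ride" with the stock pre-trained model, here is a minimal sketch that runs YOLOv4 inference through OpenCV's DNN module rather than the Darknet binary itself. It assumes you have downloaded the standard yolov4.cfg, yolov4.weights, and coco.names files and have a test image on disk; the input size and thresholds are common defaults, not values taken from this project.

import cv2
import numpy as np

# Assumed file names: the standard files distributed with Darknet/YOLOv4.
CFG, WEIGHTS, NAMES, IMAGE = "yolov4.cfg", "yolov4.weights", "coco.names", "museum_object.jpg"

with open(NAMES) as f:
    class_names = [line.strip() for line in f]

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
img = cv2.imread(IMAGE)
h, w = img.shape[:2]

# YOLO expects a square, normalized input blob; 416x416 is a common size.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())

boxes, confidences, class_ids = [], [], []
for output in outputs:
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:                      # assumed confidence threshold
            cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

# Non-maximum suppression removes overlapping duplicate boxes.
keep = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
for i in np.array(keep).flatten():
    print(class_names[class_ids[i]], round(confidences[i], 2), boxes[i])

Swapping the stock cfg, weights, and names files for ones produced by training on the annotated collection images is all it takes to move from the generic COCO classes to object-number-specific classes.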
https://medium.com/@oliviervandhuynslager-75562/i-see-what-you-dont-see-exploring-object-detection-using-darknet-and-yolov4-330ada17767f
["Olivier Van D'Huynslager"]
2020-11-12 12:26:04.417000+00:00
['AI', 'Museums', 'Design', 'Technology', 'Object Detection']
857
In the Future We Will All Live in Star Wars
Photo by Hikersbay Hikersbay on Unsplash “The thing you’re doing now, reading prose on a screen, is going out of fashion. … The defining narrative of our online moment concerns the decline of text, and the exploding reach and power of audio and video.”¹ Shhhhh. Welcome to the p̵o̵s̵t̵-̵t̵e̵x̵t̵ ̵f̵u̵t̵u̵r̵e̵ Star Wars galaxy, a highly functional post-literate society where we transfer information more fully and effectively via dynamic multimedia than static text, learn new things more easily and quickly from verbal and tonal memories like Luke’s Jedi tutelage, review and search archives in holographic video recordings, and converse with people who speak different languages in real time with the help of increasingly smarter protocol droids. We’ve only begun to glimpse the deeper, more kinetic possibilities of a post-literate culture in which the printed word recedes to the background and sounds and images become the universal language and a more efficient and effective means of communication. In this regard, post-literacy isn’t a societal ill of contemporary culture, but a future state in which communication returns to a more natural form of multisensory experience that may even more fully convey information, emotion, and persuasion. Ancient cultures revolved around the spoken word. The oral skills of memorization, recitation, and rhetoric instilled in oral societies a reverence for the past, the mystical, the ornate, and the subjective. Then, about 500 years ago, oral traditions were overthrown by technology. Gutenberg’s metal movable type in 1450 elevated writing to a dominant position in society. By means of cheap and perfect copies, printed text became the engine of change and the foundation of stability. Print reigned supreme primarily because “it is a pictorial statement that can be repeated precisely and indefinitely.”² From printing came literature, journalism, science, and laws. Printing instilled in society a reverence for precision, an appreciation of linear logic, a pursuit of objectivity, a culture of expertise, and an allegiance to authority whose truth was as fixed and final as a book. The machinery of mass reproduction unleashed the immense cultural power of written texts and gave birth to the greatest flowering of human achievement the world has ever seen. Print literacy became the heartbeat of Western culture, as well as a fundamental requirement for participation in the intellectual, social and economic enterprises. If you wanted to get ahead, you must learn to read and write. The intelligentsia, comprised of artists, teachers, academics, writers, journalists, and the literary hommes de lettres, arose as a status class around the globe, and high levels of literacy became a pre-requisite for a country’s economic success. Promotional posters for the Literacy Foundation clearly illustrate the foundational role that print literacy has long held in our culture, and the fear that classical reading and writing will soon die as a cultural norm. In this poster, a sickly Cinderella is hooked up to an IV with the tagline, “When a child doesn’t read, imagination disappears.” The irony of this image is that the Brothers Grimm collected and modified traditional folk tales from a disappearing oral culture, writing them down, and locking them into the now-familiar versions of the printed and published Grimms’ Fairy Tales. 
Source: Literacy Foundation (2012) On the contrary, orality nearly disappeared because it was an inferior medium to print in terms of information accuracy, archival, and distribution — until new technologies have turned communication on its head. The invention of the record player made it possible to record and preserve sounds for timeless retrieval and playback. In Beethoven’s day, few people ever heard one of his symphonies more than once, but with the advent of cheap audio recordings, a dabbawala in Mumbai could listen to them all day long. Then came the radio, which allowed the mass distribution of sounds to a geographically dispersed audience. Then we started giving machines ears and mouths. Today one out of every two people on this planet has a computer in their pocket that not only can decipher voices and answer back, but simultaneously has instant access to most of humanity’s accumulated knowledge. Now it’s often easier to communicate and learn through images and sounds than through text. An increasing number of A-list authors are already bypassing print and releasing audiobook originals, and audiobooks can be a far more immersive and compelling medium for conveying facts and emotions. “If written language is merely a technology for transferring information, then it can and should be replaced by a newer technology that performs the same function more fully and effectively,’” writes Patrick Tucker. “But it’s up to us, as the consumers and producers of technology, to insist that the would-be replacement demonstrate authentic superiority.”​​​​​​​³ Learning to read and write is very hard. It takes years and years of constant practice to train our brains to seamlessly convert a visual pattern of shapes into an internal auditory stream of speech that we can understand. If you don’t recall how hard it was to learn to read, just ask a child in elementary school. Clearly our brains were not designed for this type of task — it is only though force of will that we coerce ourselves to do it. Teaching reading and writing is just as difficult. A whole class of citizens dedicated to accomplishing the Herculean task has emerged. The United States, for example, publicly funds an army of 3.6 million people whose full-time duty is to teach childeren to be literate members of society.⁴ Each one of these teachers has themselves had years of specialized training just to learn how best to teach these skills. They are even required to be licensed like doctors and lawyers. Compare this to speaking and listening — tasks that our brains are evidently well-suited for. If the human brain were software, auditory verbal creation and comprehension would be built-in features which we get for free with no overhead. When we think to ourselves “I want to take a walk,” we do it instinctively by talking and listening to a verbalized inner monologue instead of creating a visualization of the letters “I-W-A-N-T-T-O-…” on an internal screen. When we speak, we use the breath in our lungs to give our thoughts a physical form. The sounds we make are simultaneously our intentions and our life force. I speak, therefore I am. Vocal learners, like parrots and humans, are perhaps the only ones who fully comprehend the truth of this. - Ted Chiang, The Great Silence Almost all children learn to talk and listen without any conscious effort. Almost all parents are competent at teaching auditory fluency without any pedagogical training. 
This raises the question: Why do we spend so much time, money and effort on learning, teaching, promoting and testing print literacy if it’s so hard and we humans are so innately terrible at it? Is there a better way to become literate — beyond the printed word? The literate world of Western Europe displaced and changed the oral cultures it encountered. So too will the post-literate world displace and transform literate societies. French bibliophile Octave Uzanne predicted in 1894 “the end of books,” and a future where phonographs would soon become cheap and small enough that people would switch from reading books to listening to them, even while walking. If by books you are to be understood as referring to our innumerable collections of paper, printed, sewed, and bound in a cover announcing the title of the work, I own to you frankly that I do not believe (and the progress of electricity and modern mechanism forbids me to believe) that Gutenberg’s invention can do otherwise than sooner or later fall into desuetude as a means of current interpretation of our mental products. Printing is…threatened with death by the various devices for registering sound which have lately been invented, and which little by little will go on to perfection.⁵ That prediction is upon us. We are already in the realm of the Fahrenheit 451 society where the traditional shell of the book is vanishing. While many people drown themselves in the void of mindless TV programs and social media in order to avoid thinking and living, new technologies also provide new tools and opportunities for education. Learning is no longer confined to a sheaf of pages with a spine you can grab, and the conceptual structure of a book — a bunch of symbols and ideas united by a theme into an experience that takes a while to complete — remains and may even be more enlivened than ever by technologies. There’s no better time to learn than now. On average, an audiobook goes by at 150–160 words per minute (wpm), and the average person reads words on a page at about 300 wpm. But many people listen to audiobooks at a faster speed, getting through the book in less time than actually reading the paper sheets. Not only is audiobook production constantly improving, but developments in technology have made audiobooks extremely convenient for the consumer. Unsurpringly, audiobook sales have grown exponentially while print book and ebook sales have declined. Anyone who has tried the latest speech recognition software knows that we have already passed the milestone where the vast majority of people find speaking to be more accurate and more efficient than writing or typing. This trend will continue and accelerate. It’s likely that typing will soon be as useful a skill as cursive penmanship (which is still being taught to school children!). The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn. ~ Alvin Toffler In the last three decades, the technological convergence between communication and computation has spread, sped up, blossmed, and evolved. Constant flux means everything is in the process of becoming, churning from “might” to “is.” We are moving away from the world of fixed nouns and towards a world of fluid verbs, and products are becoming services and processes. 
Embedded with high doses of technology, an audiobook might be an interactive experience, a contiously updated sequence of materials rapidly adapting to user feedback, competition, innovation, etc.; it might be a cultural platform packed with flexibility, customization, upgrades, connections, and new features. An audiobook might automatically explain words and concepts you don’t know, slow down when it senses that you are having trouble keeping up, and skip sections that you already know. A good audiobook experience will evolve to become a deep yet effortless conversation with an expert rather than passive consumtion. A book is no longer a finished product, but an endless process of reimagining your experience that morphs as you “read” or listen. “Booking” becomes a service rather than a noun, as liquid and open-ended as a Wikipedia page. Technologies might soon enable us to search and index much of the world’s repository of audio content, giving sounds a power that has kept text dominant in cultural life for so long. Interactive wearable audiobooks might even help the deaf hear stories. Maybe it’s time that we liberate literacy from paper and expand it far and beyond. The future requires “polymodal literacy,” a combination of visual, interactive, computational, and textual literacies. “Literacy” can encompasses multiple communication technologies. It includes legacy media like written text and visual communication. But it now extends to computational and interactive literacy. Using digital technologies like the Web requires familiarity with interactive models, while understanding how those technologies operate requires familiarity with computational processes and structures. — postliteracy.org In 2050, will we have any need for print literacy? Will someone who conquers reading and writing be better off than someone who doesn’t but can actively listen and speak with purpose and clarity? More importantly, will a master of print literacy be better off than their equivalent selves had they allocated their time and brain cells differently? What else could children achieve if they could dedicate an extra 3,600 hours of learning time to visual-spacial tasks such as contemplating Banach spaces, quantum retrocausality, or Mersenne primes? And what new abilities might we make room for in their open and plastic minds if we stopped drafting billions of neurons into service as ill-performing text-to-speech engines? In the future, we will all live in Star Wars.
https://medium.com/swlh/in-the-future-we-will-all-live-in-star-wars-dac8a6670a94
['The Quantified Vc']
2019-10-22 05:32:43.618000+00:00
['Future', 'Culture', 'Books', 'Technology', 'Startup']
858
THE TOP 5 BENEFITS OF 3D PRINTING IN EDUCATION
3D printing starts with a digital model stored in a 3D CAD (Computer-Aided Design) file, from which the physical 3D object is then produced. The model can be designed from scratch, or an existing object can be scanned to create it. The model is then processed using a specific piece of software referred to as a “slicer.” The slicer transforms the model into a stack of thin, two-dimensional layers. It then creates an instruction file (G-code) specific to the particular type of 3D printer. The kind of 3D printer commonly used in schools is the FDM (Fused Deposition Modeling) machine. The 3D printer takes the necessary raw material (plastic, rubber, metal, and similar) and then creates the model by adding layers one at a time, one two-dimensional layer after another, until the object is fully constructed and finished in accordance with the design guidelines from the initial CAD document. In the case of a 3D printing service in education, it’s about taking objects off the computer screen and into the physical world, and into the hands of the students, to be examined, analyzed, and used in various other activities that benefit from physical manipulation. Here are a few examples of how teachers and students could benefit from 3D printers for classroom use: History students can print out historic artifacts to study; Graphic Design students can make 3D prints of their work; Geography students can print topographic, demographic, or population maps; Chemistry students can print three-dimensional models of molecules; Biology students can print viruses, cells, organs, and various other biological artifacts; Math students can print 3D models of the problems they need to solve. These are a few of the ways 3D printing technology can bridge the gap between the digital and the real world. Look up the information you need on the screen, and then print it to life. Mind-boggling, isn’t it? With the price of 3D printers getting cheaper, they’re not just another tech tool for students to play with; they are now a vital and effective tool for education. They help make teaching and learning easier. 3D printing in India is one of the tools that helps students conceptualize and visualize their ideas as they design their projects, from the initial stages of sketching to the final result. The 5 major benefits of 3D printing in education From the point of view of growth and development, future engineers, designers, and artists will all have been students who were directly impacted by online 3D printing in India. Take a look at these five advantages of 3D printing’s influence on the education system… Brings Joy - 3D printing in India gives students the chance to experience their designs from the initial stage right through to creating the model. This brings excitement and an understanding of design when students experience the design process from beginning to end. The various features are viewed more easily as students build the design layer by layer. It also gives students the possibility of exploring aspects in the real world rather than just on a computer screen or in a text. 3D printing brings the realm of theoretical thinking into the real world, which students can view and feel, opening new opportunities for learning and engaging activities. 3D printing in India gives students the chance to experience their designs from the initial stage right through to creating the model.
This brings excitement and an understanding of the process of design when they experience of the design process from beginning to end. The various features are viewed more easily as students build the design layer by layer. It also gives students the possibility of exploring aspects in the real world rather than just on a computer screen or in the text. 3D printing brings the realm of theoretical thinking into the real world, which students can view and feel, opening new opportunities for learning and engaging in activities. Enhances the Curriculum — No whatever curriculum is being utilized 3D printing could assist teachers and students to work more effectively. 3D printing quote can help students move away from being inactive consumers of information on a display with little thought to productivity. Contrary to conventional classrooms where students can easily become bored, they are now engaged and active participants in the creation, design and implementation of their ideas and interaction through the printer as well as the instructor. — No whatever curriculum is being utilized 3D printing could assist teachers and students to work more effectively. 3D printing quote can help students move away from being inactive consumers of information on a display with little thought to productivity. Contrary to conventional classrooms where students can easily become bored, they are now engaged and active participants in the creation, design and implementation of their ideas and interaction through the printer as well as the instructor. Provides Access to Knowledge previously unavailable - Because the majority of 3D printers come pre-assembled and can be used with plug-and-play, it’s an enjoyable cutting-edge technology that allows students to understand. Students discover that it’s completely acceptable to not succeed on the first attempt and to attempt again to get better. When students realize that failing is part of the learning process and they are less hesitant to try and implement innovative and new ideas in everyday life. This boosts confidence of students and teachers appreciate the benefits of self-motivated, confident students. Because the majority of 3D printers come pre-assembled and can be used with plug-and-play, it’s an enjoyable cutting-edge technology that allows students to understand. Students discover that it’s completely acceptable to not succeed on the first attempt and to attempt again to get better. When students realize that failing is part of the learning process and they are less hesitant to try and implement innovative and new ideas in everyday life. This boosts confidence of students and teachers appreciate the benefits of self-motivated, confident students. opens new possibilities to Learn — A 3D printer that is affordable provides endless possibilities for learning for students. 3D printing gives students opportunities to play with concepts, while also expanding their creative abilities. It’s difficult for young children to think things through without the aid of visualization. Visual learning environments enhance their comprehension of the world and is capable of touching and seeing their creations. 3D printers provide new avenues for teaching information to students efficiently and cost-effective way. — A 3D printer that is affordable provides endless possibilities for learning for students. 3D printing gives students opportunities to play with concepts, while also expanding their creative abilities. 
It’s difficult for young children to think things through without the aid of visualization. Visual learning environments enhance their comprehension of the world and is capable of touching and seeing their creations. 3D printers provide new avenues for teaching information to students efficiently and cost-effective way. Improves problem-solving abilities — A 3D printer can provide numerous learning experiences for students. They must learn the different ways 3D printers function and how they operate as well as how to solve and troubleshoot issues. This is a subject which a lot of students don’t encounter in the course of their regular study. When they learn to identify and resolve 3D printer issues students develop determination and endurance when it comes to overcoming challenges. This will help students tackle their own issues in everyday life too. Instilling in students the ability to be creative can encourage a desire for innovation and creativity that could later be utilized in business. 3D printing helps students achieve their goals and prepares them for college. It helps them build confidence that will allow them to take on challenging courses like those in STEAM-related fields. When students discover and expand their imagination, they are able to develop imagination and creativity. Students create their own unique 3D designs which can assist in training others and also solve problems.
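To make the slicing step described at the top of this article a little more concrete, here is a deliberately tiny, illustrative Python sketch of what a slicer does: it cuts the model into horizontal layers and emits one machine instruction per layer. Real slicers and real G-code are far more involved, and the layer height and object height below are arbitrary assumptions chosen only for the example.

# Toy illustration of slicing: turn an object's height into per-layer Z moves.
# This is NOT a real slicer; the emitted lines only mimic the shape of G-code.

def slice_object(object_height_mm: float, layer_height_mm: float = 0.2):
    """Yield the Z height of every printed layer."""
    z = layer_height_mm
    while z <= object_height_mm + 1e-9:
        yield round(z, 3)
        z += layer_height_mm

for z in slice_object(object_height_mm=1.0):
    # A real layer would contain many extrusion moves; we print one placeholder move.
    print(f"G1 Z{z:.2f} ; move up to the next 0.2 mm layer")

# A 1.0 mm tall object at 0.2 mm layer height -> 5 layers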
https://medium.com/@makenica/the-top-5-benefits-of-3d-printing-in-education-6913189cac1c
[]
2021-12-23 10:07:50.669000+00:00
['Education Technology', 'Education', '3d Printing Service', '3d Printing Market', '3D Printing']
859
A Beginner’s Guide to Hadoop’s Fundamentals
Literally, Hadoop was the name of a toy elephant — specifically, the toy elephant of Doug Cutting’s (Hadoop’s co-founder) son. But you’re not here to learn how, or from where, Hadoop got its name! Broadly speaking, Hadoop is a general-purpose, operating system-like platform for parallel computing. I am sure I do not need to mention the severe limitations of a single system when it comes to processing all the big data floating around us — it is simply beyond the processing capacity of a single machine. Hadoop provides a framework to process this big data through parallel processing, similar to what supercomputers are used for. But why can’t we utilize supercomputers to parallelize the processing of big data: There is no standardized operating system (or an operating system like-framework) for supercomputers — making them less accessible to small and mid-sized organizations High cost of both the initial purchase and regular maintenance Hardware support is tied to a specific vendor, i.e., a company cannot procure the various individual components from different vendors and stack them together In most cases, custom software needs to be developed to operate a supercomputer based on the specific use case Not easy to scale horizontally Hadoop comes to the rescue as it takes care of all the above limitations: it’s an open-source (with strong community support and regular updates), operating system-like platform for parallel processing that does not rely on specific hardware vendors for ongoing hardware support (works with commodity hardware) and does not require any proprietary software. There have been three stable releases of Hadoop since 2006: Hadoop 1, Hadoop 2, and Hadoop 3. Let’s now look at Hadoop’s architecture in more detail — I will start with Hadoop 1, which will make it easier for us to understand Hadoop 2’s architecture later on. I will also assume some basic familiarity with the following terms: commodity hardware, cluster & cluster node, distributed system, and hot standby. Hadoop 1 Architecture Following are the major physical components of the Hadoop 1 architecture: Master Nodes: Name Node: Hadoop’s centralized file system manager, that keeps track of the number of blocks a data file was broken into, the block size, and which data nodes will save and process each file block — without saving any data within itself Secondary Name Node: Backup for the Name Node, but not on hot standby Job Tracker: Hadoop’s centralized job scheduler that is responsible for scheduling the execution of a job on data nodes Each of the above nodes represents an individual machine in a production environment, working in the master mode, that are usually placed in different racks in a production setup (to avoid the failure of one rack bringing down multiple master nodes). Slave Nodes: Data Nodes: Individual machines/systems where the working files, in the form of data blocks, are stored and processed upon Task Tracker: A software service to monitor the state of the Job Tracker, keep track of activities being performed by the Data Node, and report the status to the Job Tracker. One Task Tracker for each Data Node The slave nodes cannot function without the master nodes and are fully dependant on the instructions that they receive from the master nodes before undertaking any kind of processing activities. To ensure continuous uptime, slave nodes send a heartbeat signal to the name node once every three seconds to confirm that they’re up and active. 
All the above master and slave nodes are interconnected through networking infrastructure to form a cluster. In terms of processing capacity, the Job Tracker is more powerful than the Name and Secondary Name Nodes, with none of them requiring substantial storage capacity. However, the data nodes are the most powerful machines within the cluster, with substantial RAM and processing capabilities. Deployment Modes Following are the three primary deployment or configuration modes supported by Hadoop: Standalone Mode: All Hadoop services (i.e., each of the Name Node, Secondary Name Node, Job Tracker, and Data Nodes) run locally on a single machine within a single Java Virtual Machine (JVM). However, the standalone mode is seldom used nowadays. Pseudo Distributed Mode: All Hadoop services run locally on a single machine but within different JVMs. Pseudo Distributed Mode is usually used during development and testing activities and for educational purposes. Fully Distributed Mode: Used in a production setup, where all Hadoop services run on separate and dedicated machines/servers. What is a Job in Hadoop? A Job in the Hadoop ecosystem is analogous to a Python script/program that one can execute in order to perform a certain task (or tasks). Just like a Python script, a Hadoop job is a program, typically packaged as a JAR file, that is submitted to the Hadoop cluster to be processed and executed on the input (raw) data that resides on the data nodes, with the post-processed output saved at a specified location. Hadoop’s Software Components Now let’s move on from Hadoop’s physical infrastructure to its software components. The core software components of Hadoop are: Hadoop Distributed File System (HDFS), used for data storage and retrieval; and MapReduce, a parallel processing Java-based framework that is Hadoop’s programming arm and processes the data made available by HDFS. MapReduce is further comprised of: a user-defined Map phase, which performs parallel processing of the input data, and a user-defined Reduce phase, which aggregates the output of the Map phase. Just to be clear, Hadoop is a parallel processing platform (providing the hardware and software tools to allow parallel processing) that then makes available the MapReduce framework (i.e., a bare-bones skeleton that can be customized based on the user requirements) for parallel processing. But MapReduce is not the only framework supported by Hadoop — Spark is another. Hadoop Distributed File System (HDFS) HDFS is the file-management component of the Hadoop ecosystem that is responsible for storing and keeping track of large data sets (both structured and unstructured data) across the various data nodes. In order to understand the working of HDFS, let’s consider an input file of size 200MB. As explained earlier, in order to facilitate parallel processing on data nodes, this single file will be broken down into multiple blocks and saved on the data nodes. The default split size (a global setting that can be configured by the Hadoop administrator) in HDFS is 64MB. Therefore, our sample input file of 200MB will be split into 4 blocks — where 3 blocks will be of 64MB and the 4th block will be 8MB. The splitting of the input file into individual blocks and saving them on specific data nodes is taken care of by HDFS.
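As a quick illustration of the block-splitting arithmetic just described, here is a small sketch that mimics the bookkeeping only; it is not HDFS code. The 64MB default applies to Hadoop 1 (later versions default to 128MB), and the replication factor of 3 used in the last line is the default discussed in the next paragraph.

# Illustrative bookkeeping only -- this is not HDFS code.

def split_into_blocks(file_size_mb: float, block_size_mb: int = 64):
    """Return the sizes of the blocks a file would be split into."""
    blocks = []
    remaining = file_size_mb
    while remaining > 0:
        blocks.append(min(block_size_mb, remaining))
        remaining -= block_size_mb
    return blocks

blocks = split_into_blocks(200)              # the 200MB example from the text
print(len(blocks), blocks)                   # -> 4 [64, 64, 64, 8]
print(sum(blocks) * 3, "MB of raw storage")  # with the default replication factor of 3 -> 600 MB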
One critical aspect to take note of here is that the splitting of the input file by HDFS happens on the client machine, which is outside the Hadoop cluster, and the name node decides the placement of each data block onto specific data nodes, based on a specific algorithm. So the client machine directly writes the data blocks to the data nodes once the name node has provided it with the block placement strategy. The name node, acting as the Table of Contents of a book, remembers the placement of each data block within the various data nodes, together with other information, e.g., block size, hostname, etc., in a file table called the File System Image (FS Image). Failure Management of Data Nodes So what happens in case of failure of a data node? The failure of even one data node will result in the entire input file being corrupted — since one piece of our puzzle has gone missing! In a typical production setup, where we are usually dealing with data blocks of hundreds of gigabytes, it is highly inefficient and time-consuming to push the original data file back into the Hadoop cluster. To avoid any potential data loss, backup copies of the data blocks on each data node are kept on adjacent data nodes. The number of backup copies to be made of each data block is controlled by the Replication Factor. By default, the replication factor is set at 3, i.e., every block of data on each data node is saved on 2 additional backup data nodes so that the Hadoop cluster will have 3 copies of each data block. This replication factor can be configured on a per-file basis at the time of pushing the source data file into HDFS. The backup data node will kick in as soon as any data node fails to send a heartbeat signal to the name node. Once a backup data node is up and running, the name node will initiate another backup of the data block so that the replication factor of 3 holds throughout the cluster. Secondary Name Node In Hadoop 1.0, the Secondary Name Node acts as a backup of the Name Node’s FS image. However, and this was one of Hadoop 1.0’s primary limitations, the Secondary Name Node does not operate in a hot standby mode. Therefore, in the event of the Name Node’s failure, the entire Hadoop cluster will go down (data will be present in the Data Nodes; however, it will be inaccessible since the cluster has lost the FS image), and the contents of the Secondary Name Node need to be manually copied to the Name Node. We will go over this later — but this limitation was addressed with the release of Hadoop 2.0, where a Standby Name Node acts as a hot standby. Hadoop 2.0 Hadoop 2.0 is also sometimes known as MapReduce 2 (MR2) or Yet Another Resource Negotiator (YARN). Let’s try to understand the salient architectural differences between Hadoop 1.0 and Hadoop 2.0. Remember that, in Hadoop 1.0, the Job Tracker acts as a centralized job scheduler that splits up a specific job into multiple tasks before passing them on to individual data nodes — where the individual tasks on the data nodes are monitored by the Task Tracker, which then reports the status back to the Job Tracker. In addition to its job scheduling responsibilities, the Job Tracker also allocates the system resources to each data node in a static manner (that is, the resource allocation is not dynamic). Hadoop 2.0 replaces the Job Tracker with YARN, while the underlying file system remains HDFS. In addition to MapReduce, YARN also supports other parallel processing frameworks, e.g., Spark.
YARN can also support up to 10,000 data nodes, compared to only 4,000 data nodes supported by Hadoop 1.0’s Job Tracker. YARN has 2 components: the Scheduler and the Applications Manager. Both of these tasks were managed single-handedly by the Job Tracker in Hadoop 1.0. Separating these distinct responsibilities into YARN’s individual components allows better utilization of system resources. Further, the task trackers on each data node were replaced by a single Node Manager (which works in slave mode) in Hadoop 2.0. The Node Manager communicates directly with YARN’s Applications Manager for resource management. As alluded to earlier, in addition to a Secondary Name Node, Hadoop 2.0 also has a Hot Standby Name Node that seamlessly kicks in in case of the Name Node’s failure. The Secondary Name Node comes in handy in case of the failure of both the Name and the Hot Standby Name Node. What is MapReduce? As the name suggests, MapReduce is comprised of the following 2 stages, with each stage having 3 further sub-stages: Map stage All 3 sub-stages of the Map stage are performed on each of the data blocks residing in the individual data nodes — this is where parallelization kicks in within Hadoop. Record Reader The Record Reader is pre-programmed to process one line at a time from the input file and produces 2 outputs: Key: a number Value: the entire line Mapper The Mapper is programmable to process each key-value pair output from the Record Reader one at a time, based on any required logic or the problem statement. It outputs additional Key-Value pairs based on a user-defined function it was programmed to perform. Sorter The output from the Mapper is fed into the Sorter, which lexicographically sorts (obviously! 😊) the keys from the Mapper’s output. In case the keys are numeric, the Sorter will perform a numerical sort. The Sorter is pre-programmed and the only configuration possible is to implement sorting on values. Reduce stage At the end of the Map stage, we will have multiple Mapper outputs, one from each of the data nodes. All these outputs will be transferred to a separate, single data node where the Reduce operation will be implemented on them. The 3 sub-stages of the Reduce operation are: Merge Intermediary outputs from each Map operation are appended to one another to result in a single merged file. Shuffler The Shuffler is another pre-programmed, built-in module that aggregates the duplicate keys present in its input, resulting in a list of values for each unique key. Reducer The Shuffler’s output is fed to the Reducer, which is the programmable module of the Reduce stage — similar to the Mapper. The Reducer produces output in key-value pairs based on what it is programmed to perform as per the problem statement. Practical Example I will use a very simple, non-ML problem statement to try and explain the mechanics and the workflow of MapReduce.
Consider an input file with just 2 statements as follows: Processing big data through Hadoop is easy Hadoop is not the only big data processing platform Our task is to find the frequency of words in the input file, the expected output being: Processing 2 big 2 data 2 through 1 Hadoop 2 is 2 easy 1 not 1 the 1 only 1 platform 1 Going through the MapReduce stages explained above: the output of the Record Reader after reading the first line will be: Key: 0 (file/line offset — the starting position) Value: Processing big data through Hadoop is easy We can program the Mapper to do the following: Step 1: Ignore the input key Step 2: Extract each word from the line (tokenization) Step 3: Produce the output in key-value pairs where the key is each word of the line and value is the frequency of that word in the input line Accordingly, Mapper’s output after processing both lines will be something like this: Processing 1 big 1 data 1 through 1 Hadoop 1 is 1 easy 1 Hadoop 1 is 1 not 1 the 1 only 1 big 1 data 1 processing 1 platform 1 The output from the Sorter will be something like this: big 1 big 1 data 1 data 1 easy 1 Hadoop 1 Hadoop 1 is 1 is 1 not 1 only 1 platform 1 processing 1 Processing 1 the 1 through 1 Shuffler’s output will be something like this: big 1, 1 data 1, 1 easy 1 Hadoop 1, 1 is 1, 1 not 1 only 1 platform 1 processing 1, 1 the 1 through 1 Reducer can be programmed to do the following: Step 1: Take the key-value pair from Shuffler’s output Step 2: Add up the list values for each key Step 3: Output the key-value pairs where the key remains unchanged and the value is the sum of numbers in the list from Shuffler’s output Step 4: Repeat the above steps for each key-value pair received from the Shuffler Accordingly, Reducer’s output will be: big 2 data 2 easy 1 Hadoop 2 is 2 not 1 only 1 platform 1 processing 2 the 1 through 1 Use Cases of MapReduce Certain industry use cases of MapReduce include: searching for keywords in big datasets Google used it for word count, AdWords, PageRank, indexing data for Google Search, article clustering for Google News (recently Google has moved on from MapReduce) Text algorithms such as grep, text-indexing, reverse indexing Data mining Facebook uses it for data mining, ad optimization, spam detection Analytics by financial services providers Batch, non-interactive analysis Conclusion Right, so this was a very high-level and non-technical introduction to the world of Hadoop and MapReduce. Obviously, there are several other Hadoop components that I have not even touched upon here, e.g., Hive, Zookeeper, Pig, HBase, Spark, etc. Please feel free to reach out to me if you want to discuss the above content, or that of any of my previous posts, or anything in general related to data analytics, machine learning, and financial risk. Till next time, rock on!
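As a companion to the worked example above, here is the same word count expressed as a short, illustrative Python sketch in the MapReduce style. In a real Hadoop Streaming job the mapper and reducer would be separate scripts reading from standard input; here they are chained in-process for readability, and the tokens are lowercased so that "Processing" and "processing" land on the same key, which is the behaviour the hand-worked tables implicitly assume.

# A minimal word count in the Map -> Sort/Shuffle -> Reduce style described above.
# Purely illustrative; a real Hadoop Streaming job would split this into
# separate mapper and reducer scripts that read from stdin.
from collections import defaultdict

LINES = [
    "Processing big data through Hadoop is easy",
    "Hadoop is not the only big data processing platform",
]

def mapper(line):
    """Record Reader + Mapper: emit a (word, 1) pair for every token."""
    for word in line.lower().split():
        yield word, 1

def shuffle(pairs):
    """Sorter + Shuffler: sort by key and group the values for each unique key."""
    grouped = defaultdict(list)
    for key, value in sorted(pairs):
        grouped[key].append(value)
    return grouped

def reducer(grouped):
    """Reducer: sum the list of values for each key."""
    return {key: sum(values) for key, values in grouped.items()}

pairs = [pair for line in LINES for pair in mapper(line)]
print(reducer(shuffle(pairs)))
# -> {'big': 2, 'data': 2, 'easy': 1, 'hadoop': 2, 'is': 2, 'not': 1, ...}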
https://towardsdatascience.com/a-beginners-guide-to-hadoop-s-fundamentals-8e9b19744e30
['Asad Mumtaz']
2021-01-04 12:59:10.250000+00:00
['Hadoop', 'Big Data', 'Data Analytics', 'Data', 'Technology']
860
2020’s Best Wireless Gaming Headset
2020’s Best Wireless Gaming Headset Photo taken by the author. The wireless sector is where most of the action and excitement is in gaming headsets today. Gamers and peripheral companies are both engaged in an aggressive campaign to get rid of all wires without sacrificing performance, and with the launch of new consoles this year, there’s no sign of that trend slowing down. Numerous brands released completely new wireless headsets in 2020 to try and win this lucrative market…but for me, the clear winner in this field is the HyperX Cloud II Wireless. When the headset was first announced, I was skeptical. Was HyperX really going to jam wireless parts into an older model and call it good? After testing it out, my fears were unfounded and silly. This is nowhere near a hasty refresh. In fact, it’s a truly new headset, sharing only a name and a few small details with its predecessor. It has a new industrial design with an improved sound signature, and its comfort, battery life, and performance rival the premium offerings from any other company. Photo taken by the author. The HyperX Cloud II Wireless sells for $149.99 (official site here). That’s not an affiliate link, because I think those have no place in reviews. It comes in the classic HyperX red and black color scheme, and it is mainly targeted at PC gamers, though it also works with PlayStation and Nintendo Switch consoles. On PC, you’ll get access to HyperX’s latest 7.1 surround virtualization system with full support for 7.1 audio input from games, and you can use the Ngenuity software to tweak your settings and see your remaining battery. The headset comes with a detachable noise canceling microphone, and features a USB-C charge port with the necessary cable. With a 30-hour battery life and light 300g weight combined with HyperX’s famous memory foam cushions, it’s a perfect choice for marathon gaming, working, or listening sessions. It’s lighter than many of the wired headsets I reviewed this year, and although it lacks RGB lighting, its long battery life more than makes up for that. Wireless performance is also surprisingly good. I test wireless signal strength by connecting to the PC in my home office at one end of my apartment, then walking around the building. The Cloud II Wireless is one of the few models I’ve tried where I had essentially full coverage across the run of my apartment, even with multiple walls in the way. Photo taken by the author. Sound quality is nigh-on peerless for the gaming space, with my favorite sound signature I’ve ever heard in a HyperX product. It’s clean, balanced, accurate, and powerful. The sound rivals flagship audio products that cost twice as much, and like its wired predecessor it once again proves that you don’t have to spend ludicrous amounts of money to get good audio. The sound is tuned to hit a sweet spot of performance that should please just about everyone. The internal amplifier is more than capable of driving the headset to high volumes, and the sound doesn’t have any obvious digital noise issues. The lack of a wired backup connection or Bluetooth support means this doesn’t work with Xbox consoles or mobile phones, though HyperX has a history of supporting those platforms and I wouldn’t be shocked if new revisions of this design come out in the future. If you’d like to read more of my thoughts on this headset, check out my full review. 
I spent a couple of weeks throwing every test I could think of at this headset, and trying to think of things about it I didn’t like…and all I could come up with is the lack of a wired backup connection, and that it doesn’t come with a carrying bag like some other HyperX models. It continued to impress me no matter what I tried, and in the last two months of continued regular use, I still have no genuine complaints. It’s rare that I can say that about a headset. Photo taken by the author. This is easily the best wireless headset I’ve tried this year, and one of the best gaming headsets I’ve ever used. It’s the new standard by which I’ll measure $150 gaming audio products going forward, and it’s a great choice for general listening, gaming, working from home, and any other audio needs you might have as long as you own a compatible platform. I hope that this is a sign of things to come from HyperX, because this is an exceptional headset that delivers everything you could ask for at this price point. It’s a true return to form, and a standout product in the company’s large product lineup. I feel silly for ever being skeptical about it, and I can’t wait to see what more comes from this newly designed platform going forward.
https://medium.com/@xander51/2020s-best-wireless-gaming-headset-8055277de87
['Alex Rowe']
2020-12-03 17:58:11.128000+00:00
['Gadgets', 'Tech', 'Gaming', 'Music', 'Technology']
861
In Praise of Dumb Tech
In Praise of Dumb Tech Photo: Thomas Kolnowski/Unsplash The world of design has produced some life-changing inventions across a plethora of industries and sectors, with our lives being improved in previously unimaginable ways. At the same time, some companies continue to make pointless iterations and updates to existing products in the hope of designing the next big thing — whether it makes sense or not. For a while now, the current trend has been to make everything “smart.” And I mean everything. If an object exists, you can bet your life someone is attempting to make it smarter. While some companies get this fusion of object and internet connectivity right and produce incredible results, the trend also leads to a whole host of pointless and expensive gadgets that might be worse than their normal alternatives. Imagine the scenario. You arrive at your front door, about to put your code into your smart lock when you realize the door is already open. Strange. It then dawns upon you that your lock has been hacked, and your house ransacked. If you’re one of the luckier ones, the technology fastened to your door may have only given your Wi-Fi password away, or locked you out entirely. Do locksmiths know how to unlock smart locks? After you’re finally able to get into your house, you decide to make yourself a snack with your Revolution R180, heralded as “the world’s first 2-slice, high-speed smart toaster.” This gadget saves you all the hassle of using dials by letting you mash your crumby fingers into a touch screen while saving you the trouble of lowering the toast with its auto lowering and lifting mechanism. It comes at a completely reasonable $300. Sweet relief. How did we survive without this? Like a lot of people, you like a cup of tea with your toast. But the kettle is on the other side of the kitchen, and you just sat down. If only there was a solution… Enter the smart kettle, created by companies who believe that despite the fact that humans have been boiling water for thousands of years, internet connectivity can make it easier. Its biggest selling point? You can boil water from anywhere. Your life will never be the same after you’ve boiled your kettle from the living room. You go to your “smart fridge” to grab some milk. You don’t need to panic about whether there’s milk in the fridge because through the magic of the internet (and the three cameras placed inside), you can see through the fridge door. You’ll never have to open the fridge door again — until you have to retrieve your milk, obviously. If an object exists, you can bet your life someone is attempting to make it smarter. It also comes with a tablet fixed to the front of it that allows you to leave notes for other people, make a shared online grocery list, see recipes, stream music, and play videos. The problem is that all of this would be more easily performed on a tablet that wasn’t literally attached to your fridge; a tablet that would also cost far less than the $5000 smart fridge. You may be starting to realize this kind of smart appliance doesn’t exist because it’s useful — it merely exists because the idea is possible, and it could be potentially profitable. However, you can reorder groceries directly from it. Handy, right? After you’re done screaming at your fridge to order more milk like some kind of deranged person, you fancy some eggs. But, what if the eggs are off? Well, thanks to Quirky’s Egg Minder, you have an internet-connected egg holder that can tell you if your eggs are still fresh. 
You’ll never have to go through the pain of looking at the box label or dropping them in the cup of water ever again. It will even send a notification when you’re in danger of becoming eggless, in case your eyes stop working. When you fancy making yourself something to drink, you look to your Juicero — quite possibly the worst smart home product invented to date. The machine can produce you the perfect juice, unless your Wi-Fi just went down. You should really get round to sending it back, after somebody discovered it was an entirely pointless concept because they could just use their hands to squeeze out the contents of the sachet. And last time I checked, hands cost $0, not $400 (reduced from the launch price of $700). A notification pops up on your phone, prompting you to check the health of your cats. Where would you be without your LuluPet AI Smart Cat Litter Box Pet Health Monitor? This poop-examining litter box is kind enough to send you daily updates about your cat’s latest business (literally). It sure makes for interesting coffee talk. Another honorable mention comes at the dinner table, with the My Smalt salt shaker. (Yes, “Smalt” is smart and salt combined. Very clever.) Branded as a “centerpiece” and resembling a rejected companion robot from Star Wars, the Smalt can play music, offer mood lighting, be controlled by voice or phone, and deliver you the perfect pinch of salt. It can’t however, actually grind salt. As the day draws to a close, you go to the bathroom. Here, you can look into your Savvy SmartMirror by Electric Mirror, who decided it would be simpler for you to spend $2,000–6,000 on a mirror that can tell you the weather while you brush your teeth than to glance down at your already expensive smartphone. Finally, you sit on your Kohler Numi smart toilet. Forget the internet of things, this is the internet of toilets. This product is like a parody of itself. Who knew you needed an “immersive experience” when you sat down on the loo? Not only does it provide LED mood lighting and music, but you can also control the temperature of the seat itself — perfect for those cold mornings. Best of all, it will only set you back $8,000, a steal for an “intelligent toilet.” The biggest problem with the “smart” world is that very few have figured out how to build products that actually do anything useful enough to justify their price tags. In many cases, adding complexity to once-simple devices is leading to all kinds of unforeseen problems, meaning that many smart products try to “do it all” and end up being not very good at any of it. The future might be smart, but at the moment, a lot of it sure is dumb.
https://debugger.medium.com/in-praise-of-dumb-tech-b5154d307713
['Stephen Moore']
2020-12-18 06:32:59.801000+00:00
['Gadgets', 'Tech', 'Smart Home', 'Digital Life', 'Technology']
862
Top 5 unusual mining techniques
- Hey girl, how did you buy such an expensive car? - Have been mining a lot… The cryptocurrency fever has gone away! Things will never be the same, when having a good graphics card and capable hands meant that you are a miner with rather high incomes that allowed you to buy an iPhone… Everything is different now. Mining is expensive today. Moreover, you should be tech-savvy and well informed if your goal is to make money. However, thanks to cryptocurrency freaks, the romance and hype about mining are not fading away. Top 5 unusual mining techniques Technique №1: for those ready to work up a sweat One very specific European institute that is remote from mining like David Duchovny from Grammy Award invented a mining method that uses heat produced by the human body. Scientists of The Hague University of Applied Sciences designed a special suit that transforms heat energy into electricity. As part of the experiment, around 40 people wearing special suits managed to mine 16,954 units of different cryptocurrencies (Bitcoin, Ripple, and Litecoin) within 212 hours. The core idea of this method is in tune with the notion of mining: the more you work and sweat, the more you earn. A life hack for footballers: team members can put on crypto suits and by doing so increase their efficiency. Moreover, they will get a real chance to improve ROI! J That is an ideal option, as there is no need to score goals. The most important thing is just to run! Technique №2: tesling To put it bluntly, this technique is very exclusive, but if you have Тesla S, it is an ideal match for you. Yes, I know, this car is insanely expensive, but maybe you are a young crypto millionaire, or a sun of some politician, or saved money from your scholarship allowance. So, it is easy as pie: you buy Tesla S for tens of thousands of dollars, put a mining farm into the trunk compartment and here you go — you have a car with ‘green pricing’. Owners of such car farms state that the mined cryptocurrency covers car expenses. As a result, you have a completely autonomous car, make Bitcoins by driving, and do not spend money on petrol, saving your funds! Technique №3: bicycle mining Everyone knows that European people are crazy about bicycles. They ride bicycles to work, travel to other towns… Isn’t that amazing? The British company Toba releases bicycles that will mine cryptocurrency while you ride. Bicycle owners will be able to monitor the process of mining using a special program uploaded to a smartphone. Private keys that confirm ownership of coins will be stored on a device attached to the bicycle. The reward for every 1600 km will be £20 ($26.56). If you are tired from the noisy city and want to visit your granny in the village, pick up mushrooms or cowberries, take your bicycle and ride it there. Then, when you come back home, you will check your account and find out that you mined $200. Nice! Technique №4: tooth mining You clean your teeth twice a day and do not use it for earning cryptocurrency yet? You are wasting your time! The Chinese company 32 Teeth is working on a high tech toothbrush that will allow users to mine cryptocurrency. This hygienic device will have many functions acting like a mirror, alarm clock, toothbrush, and even a home dentist. The device will offer 16 modes. It will record and analyze the process of tooth brushing, remind users to clean teeth in the morning and evening. But its main distinctive feature will be the possibility of mining! 
While brushing their teeth, owners of the device will earn AYA tokens that will be exchangeable for new toothbrushes, toothpaste, and dental services. Technique №5: stalker’s mining Nothing ventured, nothing gained S.T.A.L.K.E.R Perhaps the most profitable mining method. The recipe is simple: Take one secret nuclear base in the town of Sarov, add one supercomputer and some closed territory. Connect a powerful cooling system and add several artful nuclear experts. That’s it! The most profitable mining method with a criminal flavor is ready! It is a real case and not the fruit of the author’s sick imagination. Adventurous nuclear experts who worked at the federal nuclear center decided to make money using the secret project (a supercomputer). Why not? It has enough power; the territory is guarded and protected against strangers and tourists. The place is so secret that it cannot be found on the map. The scientists understood that and started mining… They got excellent results, as the machine provided enough power to make from $500 to $700 per day. Unfortunately, there was no happy ending. While the inspired physicists were working in the laboratory and planning where they would spend the money, people in black suits were already on their way… Overall, mining is a profitable activity, but you will not be able to start without initial investments and knowledge. To gain success, you need good theoretical training, financial resources, and some luck.
https://medium.com/smile-expo/top-5-unusual-mining-techniques-4f6c983af486
[]
2018-08-17 10:11:26.063000+00:00
['Bitcoin', 'Mining', 'Blockchain', 'Technology', 'Bitcoin Mining']
863
This Tiny Wearable Device Uses Your Body Heat to Charge Electronic Devices
This Tiny Wearable Device Uses Your Body Heat to Charge Electronic Devices Over the past few years, we have seen a massive boom in the development of wearable technologies. Researchers and companies have come up with unique wearable devices. For instance, we saw Sony launch a wearable AC to beat the summer heat last year. And just recently, we saw a wearable that can inform an employer about his or her employees’ moods and mental state. Now, researchers have come up with a tiny wearable device that can essentially turn a human body into a biological battery to juice up electronic devices. Researchers from the University of Colorado Boulder developed a material that attaches to a human body in the form of a bracelet or a ring and converts body heat into electrical power. The researchers initiated their project based on the question — can they convert the body heat of humans into electrical energy to power electronic devices? Turns out, they can do that using something called Thermoelectric Generators (TEGs). TEGs are small energy-conversion devices that can turn the body heat of humans into electrical energy. These solid-state devices perform the task via a phenomenon known as the Seebeck effect. Now, to put these on human skin, the researchers needed them to be stretchable and, more importantly, flexible. So, using the TEGs, the research team came up with an appropriate device that a user can wear as a bracelet or a very techie ring. This device, which is termed a Soft Motherboard-rigid plugin module (SOM-RIP), will attach to your skin and convert your body heat into electrical energy to power small electronic devices. “In the future, we want to be able to power your wearable electronics without having to include a battery,” said the senior author of the research paper, Jianliang Xiao. Now, although the device is currently in its ideation and concept stage, the researchers think that it has a lot of potential going into the future. As of now, the concept devices are able to generate around 1 volt for each square centimeter of skin. This, as per the researchers, can power small electronic devices such as a smartwatch or a fitness tracker. Apart from converting body heat into electrical energy, Xiao’s TEG-based device can also heal itself when damaged and is completely recyclable. The team even made the device customizable, as users will be able to connect smaller units of the device with each other to create a larger structure with more power. Xiao compares this feature to the popular children’s toy Lego, as it is like “putting together a bunch of small Lego pieces to make a large structure”. The concept is currently in development at CU Boulder’s Paul M. Rady Department of Mechanical Engineering. The researchers recently published the research paper in Science Advances. Now, although it is in its very early stages, the team expects that their device will be commercially available in the market in the next 5–10 years. This way, we will not need any charging adapter or a wireless charger to juice up our electronic devices.
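For a sense of the physics involved, the Seebeck effect produces a voltage roughly proportional to the temperature difference across each thermoelectric couple, V ≈ S·ΔT, so many couples must be wired in series to reach useful voltages. The sketch below is a back-of-the-envelope estimate with purely assumed numbers; they are not figures from the CU Boulder paper.

# Back-of-the-envelope Seebeck estimate. Every value here is an assumption
# for illustration and is NOT taken from the CU Boulder device.

SEEBECK_UV_PER_K = 200     # ~200 microvolts per kelvin per couple (typical order of magnitude)
DELTA_T_K = 5              # assumed skin-to-air temperature difference
COUPLES_IN_SERIES = 1000   # assumed number of thermocouples wired in series

volts = SEEBECK_UV_PER_K * 1e-6 * DELTA_T_K * COUPLES_IN_SERIES
print(f"~{volts:.2f} V with these assumptions")   # -> ~1.00 V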
https://medium.com/@vishalshrivastava0000/this-tiny-wearable-device-uses-your-body-heat-to-chrage-electronic-devices-dfbf21805c83
['Vishal Shrivastava']
2021-03-01 03:42:36.636000+00:00
['Wearable Device', 'Charger', 'Wearable Technology']
864
What is Compound (COMP)? — All about the DeFi Lending provider
Decentralized Finance (DeFi) has become a household name for everyone this year at the latest, thanks to Compound. The lending protocol got off to a flying start with the launch of its own governance token COMP, rising several 100% in a concise period of time to rank #1 among all DeFi tokens. The dominance of MakerDAO seems to be over for now, but what is actually behind the project? How can you earn money with Compound? How do users achieve returns of 100% and more? What is behind the DeFi app, the COMP token, and the dangers of being aware? We provide you with answers to the most important questions about the DeFi and Lending Highflyer in our “What is Compound” knowledge article. What is Compound — explained The compound is a DeFi protocol that runs on the Ethereum Blockchain using smart contracts. The principle is explained, as the focus of the project is on lending and borrowing cryptocurrencies. Behind it is nothing more than lending and borrowing Ethereum-based tokens (ERC20). To lend his tokens (Lending / giving loans), the user receives interest and can also earn COMP tokens. Lending (borrowing) cryptos are also rewarded with COMP tokens, but more on that later. The interest rates are variable for both sides and are based on the supply and demand of the deposited cryptocurrency. The advantage: Compound runs 24/7 and is therefore always available, unlike a bank. Besides, the deposited cryptocurrencies can be paid out at any time, just as a loan can be repaid. So, in summary, Compound is a liquidity pool for loans on the Ethereum Blockchain, where users are motivated to use the protocol through interest and issuance of the COMP token. The history and brains behind the project Compound Finance is a startup based in San Francisco. The project triggered a lot of hype on DeFi this year. However, the fact that the COMP launch was probably such a success is no mere coincidence. After all, in 2018 and 2019, the project raised a total of USD 33.2 million through funding rounds, so that the founders and developers were already provided with sufficient liquidity for the actual development work. However, the investors themselves are also exciting: Andressen Horowitz, Polychain Capital, Coinbase Ventures, and Bain Capital Ventures invested in the young startup. Coinbase, in particular, triggered the run on the tokens with its subsequent COMP listing. The brains behind the company are CEO Robert Leshner and CTO Geoffrey Hayes, who have already jointly and successfully built companies and bring experience from the financial industry. How does Compound work? As explained in part above, Compound works on a simple principle that we know from banking: Because similar to a bank, I can deposit my money (= cryptocurrencies) in my account and receive an annual interest rate for it. The principle can thus be compared to a checking or call money account, although the interest rates are significantly more attractive in the case of Compound. However, the big difference to a bank is the custody because the company Compound Finance does not manage any cryptocurrencies. Everything takes place via smart contracts on the Ethereum blockchain. This eliminates the middleman, and the control and responsibility (!) over the assets remains with the user. This is where the term “Decentralized Finance” (DeFi) comes from. Because DeFi protocols often map applications from the “real” or centralized financial world onto the blockchain, making them accessible to everyone. 
Compound focuses on lending, which we refer to in English as lending and borrowing. But before we look at how you can make money with Compound, we need to understand how the protocol works and what role the COMP token plays in it. The COMP Token and Governance The COMP Token plays a central role in the Compound ecosystem and forms the core of the decentralized autonomous organization (DAO) that governs the protocol. Each COMP Token reflects voting right in the organization, which decides the future of Compound in proposals. The principle of 1 COMP = 1 vote applies. Whoever therefore combines many COMP and thus many voting rights, decides, for example, on: Interest rates Collaterals (security, the minimum amount of the deposited credit sum) Admins COMP distribution and many other parameters and variables of the protocol Thus, it follows directly that all the above parameters are flexible and can be adjusted at any time by DAO’s change proposals. Proposals — decisions in the protocol To propose, i.e., a change proposal for Compound, one must hold at least 1% of the COMP tokens in circulation. The proposal must then be voted on within 3 days. In doing so, each person with a vote (i.e., at least 1 COMP Token) can decide for or against the proposal. If at least 400,000 votes are cast for the proposal, the change will be implemented after another 2 days. IMPORTANT: A proposal is a direct change to the code and not a vote for an idea implemented by the Compound Team. Polychain Capital currently holds most COMP (voting rights). For an overview of all proposals and voting distribution, we recommend the Compound’s governance page. The COMP distribution In total, there will be 10 million COMP tokens, of which 4,229,949 COMP will be distributed to users over 4 years. It kicked off on June 15, 2020, with 0.5 COMP per Ethereum block (2,880 per day) flowing into the hands of users of the protocol since then. All active markets, weighted by their interest rate, receive a portion of this payout in the process. As mentioned above, both lenders and borrowers get shares of this distribution. Thereby there is an equal weighting (50:50). The principle of receiving not only the interest rate but also the tokens of a certain project is to yield farming. Graphically, this looks as follows: IMPORTANT: The other part of the 10 million COMP tokens belongs to investors and the team. These tokens are partially protected by lock-ups but could certainly put selling pressure on the price. This happened, for example, after the COMP listing on Coinbase. The team sold part of its own tokens. cTokens and interest on Compound We now know that you can earn interest for lending cryptocurrencies on Compound and even get COMP Tokens as a bonus. This certainly raises the question of how this process works in detail. We would now like to explain this question in the following. An important component for this are the so-called cTokens. cTokens are nothing more than a computing unit for the compound protocol. When a user provides cryptocurrency to the protocol, cTokens are used to keep track of the funds they lend and any interest earned. Each time a user provides funds to the lending pool, they receive a corresponding balance in cTokens. Each asset has its own cToken. For example, when a user lends DAI for the protocol, they receive a corresponding balance in cDAI. cTokens are ERC20 tokens, which means they can be viewed on Etherscan at any time and can also be stored in Ethereum wallets such as Metamask. 
It also makes them easy to use on various other platforms. In summary, cTokens securitize your share in the corresponding lending pool of a particular cryptocurrency and thus your right to interest. Since they represent the value of an underlying cryptocurrency, they belong to the category of derivatives. After all the theory, let’s illustrate this with a practical example. Here’s how cTokens work — An example Let’s say you send 1,000 DAI to the compound protocol when the exchange rate is 0.020070. You would then receive 49,825.61 cDAI (1,000/0.020070). A few months later, you decide it is time to withdraw the DAI from the protocol. The exchange rate is now 0.021591: your 49,825.61 cDAI is now equal to 1,075.78 DAI (49,825.61 * 0.021591). Thus, you have earned 75.78 DAI in interest. So, if you want to withdraw your DAI again, you redeem the amount of cDAI you want into DAI and transfer the DAI back to your Ethereum Wallet. How do I set up Compound? — The quick guide Let’s get to the exciting part that we know how Compound works and what role tokens play. So, let’s take a look at how you can earn money with Compound now in specific. Setting up Compound is actually very easy because that’s what makes the new DeFi protocols stand out. Connect Ethereum Wallet to Compound As mentioned earlier, the Compound protocol works on the Ethereum network. So, to get access to the dApp and its features, you need an Ethereum wallet that can communicate with Compound. For this, we recommend Ledger hardware wallet, Metamask (browser extension), or Argent (smartphone app), for example. Once you have the services set up and Ethereum or other ERC20 tokens on the wallet, you can get started right away. If you use Argent, everything runs through the app on your phone. With Metamask, you can use the browser extension and connect your wallet to the Compound page protocol. Lend or borrow cryptocurrencies In the next step, you can start earning interest or borrow cryptocurrencies. To do this, you have to decide whether you want to lend tokens (Supply Market) or borrow them (Borrow Market). Then select the cryptocurrency you want and follow the instructions in the browser or on the app. For our example, let’s assume that you want to lend DAI. Next, select the amount you want to deposit. Before you approve the loan, you will also see the expected interest per year. Keep in mind that this is only a snapshot. The interest rates vary and can be adjusted. You will then be redirected to your connected wallet, where you sign your transaction and transfer your tokens to the compound protocol. As described in the “cTokens” section, you will then receive your cDAI using DAI as an example. You can either claim the COMP tokens earned in addition to the interest yourself in the voting area or have them automatically paid out to your Ethereum wallet later with your withdrawal. What cryptocurrencies does Compound support? The compound, meanwhile, supports 8 cryptocurrencies that you can lend or borrow. Among them are Ethereum (ETH), of course, the stablecoins DAI (DAI), Tether (USDT), and USDC, as well as Augur (REP), 0x Token (ZRX), Basic Attention Token (BAT), and Bitcoin in the form of Wrapped Bitcoins (WBTC). Bonus: 100%+ Annual Interest through COMP Yield Farming For most of you, the focus is certainly on the yield that can be earned on Compound. DeFi protocols make it possible to earn a passive income on the cryptocurrencies held. These returns are often significantly higher than those from the traditional financial market. 
The background is not only the elimination of middlemen but, of course. Also, the enormous risks behind it, whether DeFi or classic financial market: An increased risk always accompanies an increased return. In the next section, we will go into more detail on this point. But first, let’s turn our attention to maximizing returns: The best way to achieve the maximum return with compound is to take advantage of the protocol. This is what we call yield farming. Because with Compound, in addition to the interest for lending cryptocurrency, you can also earn the COMP token. This increases the yield even more, especially when the token performs as outstandingly as it did at the start in June. However, it can get quite complicated because tricks can quickly increase the return from COMP and interest to 4–5 times. For example, one could deposit USDC and then borrow USDT. You then convert that to USDC and deposit it at Compound, use it there, and withdraw the USDT again. What are the risks of Compound and DeFi? The strength of DeFi protocols is their decentralized nature. At the same time, this is also one of their biggest weaknesses because if there is a bug in the smart contract, your funds are at risk. This has already happened with many DeFi protocols in the past and proves how much the sector is still in its infancy. The dangers are hard to assess or evaluate for the ordinary user, making the DeFi platforms with their attractive interest rates a big challenge, especially for beginners. Therefore, you should always invest only what you are willing to lose. There are insurances for losses caused by possible hacks, but the complexity for the ‘simple’ user is still high even here. Other threats include: Base layer risk (Ethereum 2.0) Errors in the economic design of a protocol (keyword governance) User error Bubble formation High transaction fees On top of that, Compound also offers its own risk: the liquidation clause. If you borrow money on the platform, you must post collateral. If the value of your borrowed cryptocurrencies exceeds your collateral due to price increases, liquidation may occur. Here, other users can step in to increase your collateral, but they also receive a discount and thus a real incentive to want to liquidate you. This is a dangerous game, which can quickly lead to mass liquidations in volatile markets, as in the Corona crisis. Now it should be clear to you that the topic of DeFi has many opportunities and risks that should not be underestimated. Conclusion and outlook Compound has managed to get the buzzword “Decentralized Finance” (DeFi) into people’s heads. It is fair to say that 2020 represents the breakthrough of the trend. Where the end of the development will be is hardly foreseeable at the moment. The value of curled values in DeFi applications is skyrocketing from one record high to the next day. The speed and rapid development offer opportunities and risks in equal measure. While some demonize DeFi, others see it as the foundation for the next big bull run in the crypto market. The future will show how sustainable this development is. However, we already see that Staking and DeFi are enjoying great demand and offer attractive interest rates to the cryptocurrencies. However, the development probably stands and falls with the further development of the blockchain protocols, which still lack a performant scaling and currently make the DeFi venture very expensive. Ethereum 2.0 will undoubtedly be one of the most important steps for DeFi. 
For Compound itself, the next steps will be to introduce new values in the Lending Protocol. Tokenized real assets are also to be listed on Compound in the future. Among others, the Japanese YEN, the US dollar, or even stocks like Amazon or Google. The possibilities through DeFi are limitless. The only question is how it will actually be accepted. We will, of course, keep you up to date here! I share more intimate thoughts in a monthly newsletter that you can check out here. Please let me know in a comment and join me on various social media platforms: Twitter ● Instagram ● Facebook ● Snapchat ● LinkedIn WHATEVER YOU DO, DO IT WITH LOVE AND PASSION!
https://medium.com/chainexplained/what-is-compound-comp-all-about-the-defi-lending-provider-2dee6b145b2a
['Lukas Wiesflecker']
2020-12-26 11:45:11.053000+00:00
['Compound', 'Investing', 'Defi', 'Technology', 'Lending']
865
Rate My Oppressor
How I overcame setbacks and built a Rails web application on top of my CLI project What Is Rate My Oppressor? Rate My Oppressor (RMO) uses OpenOversight’s database to display officers in the Chicago PD. Users can leave reviews on individual officers, contributing to my overall theme of holding law enforcement accountable to the communities they are sworn to serve. One of the biggest ways this can be done is via federal law. Here’s a really informative, easy to follow video on qualified immunity and the 2020 Presential election that partially inspired me to build RMO. I imagined RMO to be a continuation of my Popo command-line interface project. I wanted to bring some of the ideas I had then to the present and test how well of a grasp I had on Ruby thus far. Or at least that’s what I was thinking at first. How I Started I have a habit of going into project builds with a sense of purpose: I plan out my user experience, outline the key concepts I want to dive into, and complete project requirements line-by-line before tackling my stretch goals and front-end design. It’s a foolproof plan that’s gotten me through other tough projects in the past, so I’ve grown to stick to it when approaching a particularly difficult build. But this project was a different kind of monster. The dozens of folders kept me hopping from file to file trying to ensure my associations were neat. Scouring files made it a bit harder to find bugs, and scraping with Rails was an entirely new endeavor I had to learn piece-by-piece. That goes without mentioning OmniAuth and it’s inherent complexity! A lot of the time I was looking at my computer screen like this: How I Finished After finishing up my login, logout, and signup routes, I spent a decent about of time hauling over different gems that could scrape a Javascript heavy site like OpenOversight. I ended up coming across the Kimurai Gem that helped me find, iterate over, and display all of the officer attributes I wanted for my project alongside Nokogiri. Here are the guides I used to do that! Scraping with Nokogiri & Scraping with Kimurai Once I had the officers displaying on my Feed page, I worked on my Reviews class to make sure a user could leave a review for a particular officer and that a review could only be edited and deleted by the user that created it. Afterward, I had to tackle OmniAuth. To this day, I’m still reading over documentation, watching tutorials, and tuning in to MIT lectures to figure out how OmniAuth works under the hood. I’m even refactoring some of the code in Rate My Oppressor to better handle OmniAuth, improve usability, and user authentication protocols. What I Learned After watching a couple of walkthrough tutorials and reading through the Rails documentation, I found that Rails is a responsive and powerful framework that, when used in its full capacity, can really accelerate web app production. I can also see why it’s an essential framework to learn: Rails upholds convention over configuration. This means that “by giving up vain individuality, you can leapfrog the toils of mundane decisions, and make faster progress in areas that really matter.” This doctrine helps to eliminate redundancy and unnecessary deliberation, whilst also lowering the knowledge barrier for fledging software engineers like me! What is essential to remember — and is something I constantly have to remind myself of — is this: programming is always a work in progress. No one language is perfect and thus, no one engineer is inherently more capable than another. 
If you’re willing to learn as you go and put in the work to be a better you — you’ll always succeed because you’re progressing on your own time and with your own standard.
https://medium.com/@victorrw-vw/rate-my-oppressor-b211b92fc14b
['Victor Williams']
2020-10-27 18:01:30.179000+00:00
['Software Development', 'Rails', 'Engineering', 'Technology']
866
Step-by-Step Guide — Building a Prediction Model in Python
Understanding the Apple Stock Data Secondly, we will start loading the data into a dataframe, it is a good practice to take a look at it before we start manipulating it. This helps us to understand that we have the right data and to get some insights about it. As mentioned earlier, for this exercise we will be using historical data of Apple. I thought Apple would be a good one to go with. After walking through with me on this project, you will learn some skills that will give you the ability to practice yourself using different datasets. The dataframe that we will be using contains the closing prices of Apple stock of the last one year (Sept 16, 2019 — Sept 15, 2020). Read Data import pandas as pd df = pd.read_csv('aapl_stock_1yr.csv') Head Method The first thing we’ll do to get some understanding of the data is using the head method. When you call the head method on the dataframe, it displays the first five rows of the dataframe. After running this method, we can also see that our data is sorted by the date index. df.head() image by author Tail Method Another helpful method we will call is the tail method. It displays the last five rows of the dataframe. Let’s say if you want to see the last seven rows, you can input the value 7 as an integer between the parentheses. df.tail(7) image by author Now we have an idea of the data. Let’s move to the next step which is data manipulation and making it ready for prediction.
https://towardsdatascience.com/step-by-step-guide-building-a-prediction-model-in-python-ac441e8b9e8b
['Behic Guven']
2020-10-18 13:26:11.800000+00:00
['Machine Learning', 'Artificial Intelligence', 'Technology', 'Data Science', 'Programming']
867
A Beginner-Friendly Explanation of How Neural Networks Work
The Mechanics of a Basic Neural Network Again, I don’t want to get too deep into the mechanics, but it’s worthwhile to show you what the structure of a basic neural network looks like. In a neural network, there’s an input layer, one or more hidden layers, and an output layer. The input layer consists of one or more feature variables (or input variables or independent variables) denoted as x1, x2, …, xn. The hidden layer consists of one or more hidden nodes or hidden units. A node is simply one of the circles in the diagram above. Similarly, the output variable consists of one or more output units. A given layer can have many nodes like the image above. As well, a given neural network can have many layers. Generally, more nodes and more layers allows the neural network to make much more complex calculations. Above is an example of a potential neural network. It has three input variables, Lot Size, # of Bedrooms, and Avg. Family Income. By feeding this neural network these three pieces of information, it will return an output, House Price. So how exactly does it do this? Like I said at the beginning of the article, a neural network is nothing more than a network of equations. Each node in a neural network is composed of two functions, a linear function and an activation function. This is where things can get a little confusing, but for now, think of the linear function as some line of best fit. Also, think of the activation function like a light switch, which results in a number between 1 or 0. What happens is that the input features (x) are fed into the linear function of each node, resulting in a value, z. Then, the value z is fed into the activation function, which determines if the light switch turns on or not (between 0 and 1). Thus, each node ultimately determines which nodes in the following layer get activated, until it reaches an output. Conceptually, that is the essence of a neural network. If you want to learn about the different types of activation functions, how a neural network determines the parameters of the linear functions, and how it behaves like a ‘machine learning’ model that self-learns, there are full courses specifically on neural networks that you can find online!
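To make the node mechanics concrete, here is a minimal sketch in Go (not from the article; the weights, bias, input scaling, and the choice of a sigmoid activation are illustrative assumptions) of a single node applying a linear function followed by an activation function:

package main

import (
	"fmt"
	"math"
)

// sigmoid squashes z into the range (0, 1), acting like the "light switch" activation.
func sigmoid(z float64) float64 {
	return 1.0 / (1.0 + math.Exp(-z))
}

// node computes the linear function z = w·x + b, then feeds z into the activation.
func node(weights []float64, bias float64, inputs []float64) float64 {
	z := bias
	for i := range weights {
		z += weights[i] * inputs[i]
	}
	return sigmoid(z)
}

func main() {
	// Hypothetical, already-scaled inputs: lot size, # of bedrooms, avg. family income.
	x := []float64{0.4, 0.75, 0.6}
	w := []float64{0.8, -0.2, 0.5} // illustrative weights, not learned values
	b := 0.1                       // illustrative bias
	fmt.Printf("activation: %.3f\n", node(w, b, x)) // a value between 0 and 1
}

Stacking many such nodes into layers, and letting a training procedure choose the weights and biases, is what turns this simple calculation into the self-learning model described above.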
https://towardsdatascience.com/a-beginner-friendly-explanation-of-how-neural-networks-work-55064db60df4
['Terence Shin']
2020-06-03 15:09:12.829000+00:00
['Artificial Intelligence', 'Data Science', 'Machine Learning', 'Technology', 'Education']
868
My Top 5 Kickstarter Projects — August 2021
With September in full swing, let's take a look back at the best August Kickstarter projects. I've got a great round-up of products for you. __ NOSU is a dual-purpose stainless steel tumbler with built-in cutlery. NOSU, which is short for No Single Use, is the next big, simple solution to convenient and sustainable living. NOSU is perfect for active lifestyles and hot or cold beverages. With this tumbler, you'll never forget your cutlery for on-the-go meals ever again. __ There are many duffle and laptop bags available on the market. However, CACHE is unique in that it is a hybrid tech sling and duffle. It's spacious, has quick access, and is versatile and convertible. It comes in two color options, static and carbon, and is perfect for carrying your everyday essentials. In addition, its ergonomic shoulder strap is compatible with both left- and right-handed people. __ Razor blades get dull quickly, and traveling with one isn't always convenient. Erazor is a revolutionary ceramic-blade portable shaver that won't go dull for years. Erazor will remove any hair beneath it altogether and can quickly be charged with a USB-C cable. The material used in the blade is well known for its superior performance and is generally used for precision cutting. Did I mention that it's easy to clean and will last up to 60 days on one charge? __ If your family is anything like mine, you have different remotes lying around everywhere. It's always tough to figure out which remote operates what in our house. SofabatonX1 is the most versatile universal all-in-one smart remote. This remote can control all your entertainment and home devices. The remote's battery life lasts up to 60 days on a single charge, and it also works with Alexa and Google Assistant. The SofabatonX1 is perfect for de-cluttering your remotes. __ Finally, the Culla Blanket by Crua is the ultimate outdoor blanket. Culla Blanket is a graphene-infused, lightweight, water-resistant, tear-resistant outdoor blanket. This blanket is the perfect addition to your outdoor adventures. It is sure to warm you up, is versatile, and can be used in many ways, from a picnic blanket to a sleeping bag.
https://medium.com/@roymorejon/my-top-5-kickstarter-projects-august-2021-255dd2708061
['Roy Morejon']
2021-09-06 12:24:40.849000+00:00
['Innovation', 'Crowdfunding', 'Technology', 'Kickstarter', 'Gadgets']
869
How to Do a Good Code Review?
As a code reviewer, you have the power to approve any code, and along with that comes the responsibility to make sure that the code is in good condition. In this article, we will go through a listing of questions and points that could help code reviewers focus on what matters during a code review. Credit: Ebenezar John Paul If you are reading this article, you might find the following article helpful Here are some of the questions that you can ask yourself when reviewing a changelist (pull request) Design related aspects of code under review Is the code well-designed? Does the code demonstrate low coupling and high cohesion? Should the code be moved elsewhere? Does the code interact well with the existing code and features? Functionality-wise, does the code do what the developer intends to do? Is the code written well-enough for both end-users and future developers to maintain and extend upon? Is the code too generic and over-engineered? Are there any edge cases, i.e., concurrency, deadlocks, or race conditions? Is the code relating to the web layer, business logic, and database in their appropriate layers? For backend development, do the endpoints return Data Transfer Objects (DTOs) or the business model (Entity)? Carefully review any dependency changes in code I do not suggest that you re-invent the wheel and avoid dependencies all-together however, I do strongly recommend that you think twice before adding a dependency to your codebase. The future of open source projects is not in our hands, and their trajectory could affect products that rely on them. For example, if a dependency is not maintained anymore and there are security or other types of vulnerabilities then you are stuck with it. Depending on the dependency, a majority of times it is really complex to remove or replace a dependency from a production system without affecting the users. Another example, the more the dependencies the higher the maintenance of the product and this is because upgrading dependencies is a bumpy ride and depend on quite a few factors How outdated the dependencies are? Are there any breaking changes in dependency versions? Are there official migration guides? Is the dependency being forked to accommodate the product’s custom needs? All of the above decides how complex, and expensive the upgrade process is going to be and how would it affect your customer base. Develop an approval process for new dependency requests and define which team will own the dependency and maintan it regularly. Here are some of the questions you can ask yourself as a code reviewer Can the addition of the new dependency be justified? What are the pros and cons of these new dependencies? What other transitive dependencies are being pulled via this new dependency, and will there be any conflicts with existing dependencies? When upgrading/dropping an existing dependency, are we confident there will be no runtime exceptions directly from code or indirectly? Is the project active and maintained? How many open issues are there? What is the test/code coverage? How many active contributors are there? How old is the project, and are there any alternatives? Avoid forking and changing libraries at all cost Forking and changing an open-source library should never be an option, and that’s because it is too expensive to keep the forked version of the library in sync with the original. The biggest challenge with forking a library is keeping the fork in-sync with the original library as it evolves over time. 
Let’s say, you forked Project A and added a bunch of code to support your product’s needs. Fast-forward a few months and years down the road, Project A has moved on from when you forked it and there are a bunch of new versions and you would like to upgrade Project A. The challenge is to make sure that your custom code works with all future versions of Project A, without breaking anything. It’s almost always a bumpy ride. Good luck. If you must fork, then make sure to keep your fork as minimum as possible and do your ultimate best to contribute back your changes to the original codebase. Is the code forking an open-source library? Can the fork be justified? Why are we forking? Is it worth it? What alternative options do we have instead of forking? Are the original code and forked/changed code on separated commits? Watch out for tech debt during the code review Tech debt is almost always the result of a quick and dirty approach of coding a fix or feature due to time constraints. Technical debt (also known as design debt[1] or code debt, but can be also related to other technical endeavors) is a concept in software development that reflects the implied cost of additional rework caused by choosing an easy (limited) solution now instead of using a better approach that would take longer. — Wikipedia It might be difficult to avoid tech debt for new companies that do not have the financial and technical resources available to them and this is the phase where a huge volume of tech debt is added to the codebase because, the objective is to get a product that works in the market, not built a perfect product. Tech Debt, Credit: Dilbert by Scott Adams There is no excuse for adding tech debt to an established product that has a healthy customer base, financial and technical resources. Such companies should focus on addressing the tech debt that was added in the earlier stages of product instead of adding new tech debt. Do the new changes degrade the qualify of the existing code? Do the new changes make a method/class candidate for refactoring? Does the changelist (pull request) introduce tech/code debt? Are there any duplicate blocks of code that could be made reusable? Tests During the code review, tests are probably the one area of the pull request (changelist) that would get skimmed through or ignored. Simply because a test is passing it does not mean that it is a valid test. Are the tests correct? Do they fail when the code is broken? Do the tests cover all branches (execution paths) and instructions in the code? Are the tests independent of each other? Are the tests easy to understand and maintain? Does this feature need to be tested under load to make sure that it scales well and that there are no memory leaks? Adding dead code Sometimes the developer will say, well, we don’t need this block of code now but, maybe we will need it in the future. Or, a developer would make a method so generic with the hope that it covers future requirements. Let’s cross that bridge when we get there. Don’t add dead code and hope that it resurrect in the future. Such dead code should not be approved because there is a very high chance that the code is never used in the future or it won’t be bullet-proof enough to accommodate future needs. Credit: Oliver Widder Is it the right time to add this functionality? Does the code address only the acceptance criteria and nothing more or less? Easy To Read and Maintain Sometimes developers write super complex code for something straightforward because it makes us feel good. 
This is usually the case with junior or new developers. Complexity-wise, is the code as a whole or, in part, challenging to read and understand? Is the code easy to maintain and extend? Is the code easy to test? Naming and documentation is hard Let’s admit it, naming is very hard and that’s why it needs the code reviewer's attention. Credit: CommitStrip.com The questions you should ask yourself as a code reviewer Do the names easily communicate what the item is or does? Are the code comments easy to understand? Is the code over-commented or not commented at all? Is the comment necessary? Can a complex piece of code be refactored instead of adding comments? Are the code comments meaningful and gives more information than the actual code? Are all the documentation, i.e., test, build, release relating to the code change updated? Not all the questions might be relevant for your specific case, feel free to pick those that matter the most.
https://medium.com/swlh/how-to-do-a-good-code-review-c2cab4ef32bf
['Rafiullah Hamedy']
2020-08-25 03:29:03.036000+00:00
['Programming', 'Java', 'JavaScript', 'Software Development', 'Technology']
870
Maddie Stone published a generous, thoughtful, and mind-expanding piece exploring the implications…
Maddie Stone published a generous, thoughtful, and mind-expanding essay exploring the implications of the near future extrapolated in my latest novel: Geoengineering, or hacking the planet to cool it down, is either a maniacal plan dreamt up by foolhardy scientists or a useful tool for staving off climate catastrophe — maybe both. It raises hard questions about what sorts of sacrifices humanity may have to make for the greater good and who gets to decide; questions that beg for nuanced conversations about the social, environmental, and political risks and rewards. Yet in science fiction, geoengineering tends to get treated with all the nuance of Thor’s hammer striking a rock monster. Which is why Eliot Peper’s recent novel Veil, set on a near future Earth beset by climate crises, is such a refreshing read. This book gets geoengineering right by showing that there are no obvious right answers.
https://eliotpeper.medium.com/maddie-stone-published-a-generous-thoughtful-and-mind-expanding-piece-exploring-the-implications-4456da8dfa42
['Eliot Peper']
2020-12-09 04:37:08.468000+00:00
['Technology', 'Climate Change', 'Future', 'Science', 'Books']
871
December 2020 Deals Recap
As we approach the winter holidays we have a final monthly deals recap market map for you. With a new year, and a light at the end of the tunnel (vaccine rollouts), we are looking forward to a better, brighter, and healthier 2021. Things are looking bright for New England, as capital floods into biotech, deeptech, and just about all tech in the region. Again, we’re thankful for all you founders and investors continuing to move forward with your plans to make the world a better place! Now, onto the deals. [NOTE: Round info per Crunchbase reporting]
https://medium.com/the-startup-buzz/december-2020-deals-recap-5b36019ab47a
['Matt Snow']
2020-12-22 20:02:37.049000+00:00
['Technology', 'Fundraising', 'Startup', 'New England', 'Venture Capital']
872
Is Maxar Technologies making money in space? — Market Mad House
Satellite builder Maxar Technologies Inc. (NYSE: MAXR) has become a stock to watch. In 2020, Maxar's share price grew from $16.60 on 2 January to $27.25 on 23 November. I think all the publicity about SpaceX and Blue Origin's launches drives interest in space stocks. For instance, SpaceX's Crew Dragon just took four astronauts to the International Space Station (ISS). Moreover, SpaceX's Falcon 9 rocket landed on a barge in the ocean. Moreover, Blue Origin has several contracts with NASA, including an effort to develop a space-robot operating system, Techcrunch reports. Blue Origin is also developing the Integrated Lander Vehicle, a next-generation moon lander and part of NASA's Human Landing System (HLS) program, a press release indicates. Blue Origin claims to have tested precision lunar landing technology with its New Shepard rocket. Sorry, no SpaceX and Blue Origin IPO soon However, both Blue Origin and SpaceX are private companies. Moreover, I doubt those companies' billionaire owners have any incentive to hold initial public offerings (IPOs). To explain, Jeff Bezos, the world's richest man, owns Blue Origin. Bezos has no need for money. Moreover, I think Bezos and SpaceX owner Elon Musk fear shareholders could make them abandon their dreams of colonizing space. However, the two billionaires can do whatever they want in space with private companies as long as they spend their own money. Thus, people who want to invest in space but want to stay away from defense contractors such as Northrop Grumman (NOC) will look at Maxar (MAXR). What is Maxar Technologies? Maxar Technologies (MAXR) builds space vehicles such as the Galaxy 37 geostationary communications satellite. The Galaxy 37 is one of four recent satellites Intelsat purchased from Maxar. Intelsat has bought 59 spacecraft from Maxar since the 1970s, Megan Fitzgerald, Maxar's Senior Vice President of Space Programs Delivery, claims. Intelsat is contracting with Maxar for its next-generation 40 geostationary communications satellite. Maxar also built the BSAT-4B Satellite for the Broadcasting Satellite System Corporation. The BSAT-4B can broadcast 4K and 8K Ultra HD (high definition) television to Earth. Maxar has been building satellites for over 60 years. Maxar claims to have built over 90 communications satellites in that time. Uses for Maxar satellites include communications, mapping, surveillance, intelligence-gathering, and imagery. Thus, Maxar sells space products for which companies will pay money. In contrast, SpaceX and Blue Origin's businesses are theoretical. For instance, nobody will pay money for a space habitat or a Mars colony unless those things exist. Is Maxar Technologies Making Money? Maxar (MAXR) makes some money. It reported a quarterly operating income of $7 million on 30 September 2020. In addition, Maxar made a $106 million quarterly gross profit on quarterly revenues of $436 million on 30 September 2020. Maxar is making less money in 2020. Maxar reported a quarterly gross profit of $122 million and a quarterly gross profit of $252 million on 31 December 2019. Moreover, Maxar's quarterly revenues rose from $410 million on 31 December 2019. Stockrow estimates Maxar's revenues grew at a rate of 5.57% in the quarter that ended on 30 September 2020. What Value does Maxar have? Maxar (NYSE: MAXR) generates some cash; it reported a $96 million quarterly operating cash flow on 30 September 2020. The quarterly operating cash flow rose from $51 million on 30 June 2020 and fell from $171 million on 31 December 2019. 
Notably, Maxar's quarterly ending cash flow fell from $152 million on 30 June 2020 to -$119 million on 30 September 2020. Maxar had $60 million in cash and short-term investments on 30 September 2020. The cash and short-term investments fell from $177 million on 30 June 2020. Maxar offered a little value in the form of $4.565 billion in total assets on 30 September 2020. Maxar's total assets fell from $5.157 billion on 31 December 2019. Is Maxar a Value Investment? Maxar will pay a small quarterly dividend of 1₵ on 31 December 2020. Thus, Maxar is a cheap space technologies stock that pays a dividend. However, I think the dividend is too small to make a difference for ordinary people. Thus, Dividend.com estimates Maxar shares offered a 4₵ annual dividend and a 0.14% dividend yield on 23 November 2020. If you are looking for a cheap value investment in space technology, I think Maxar is worth a look. However, if you want a stock that makes money, you need to avoid Maxar (NYSE: MAXR).
https://medium.datadriveninvestor.com/is-maxar-technologies-making-money-in-space-market-mad-house-68b9d4416678
['Daniel G. Jennings']
2020-12-03 15:50:58.516000+00:00
['Satellite Technology', 'Space', 'Spacex', 'Blue Origin', 'Elon Musk']
873
Making sense of the RAFT Distributed Consensus Algorithm — Part 2
In part 1 of this series, we covered the basics of Raft. Please go through that part first if you have not already. In this part, we'll concentrate on the detailed Raft replication technique. The concepts explained here are very important & core to Raft. Raft Replication The core idea behind any consensus algorithm is that for a given piece of data (key), at any point in time, either all or a majority of the cluster members should return the same value. Raft uses replication to achieve this. Replication has been used to build fault tolerance & redundancy in distributed systems for ages. Raft maintains identical replicated state machines across all nodes by making sure that the client commands saved in the logs are identical and are in the same order. Once a leader is elected, the high level replication process looks like this: Figure 10: High level replication, Courtesy: The Raft extended paper The consensus module shown above is a logical layer that exists in all the nodes. At a high level, it accepts a client command, takes care of replicating & committing it as per the algorithm: Step 1: The client (e.g., a distributed database system) sends a command (e.g., an INSERT command in SQL) to the server. Step 2: The consensus module at the leader handles the command: it puts it into the leader's log file & sends it to all other nodes in parallel. Step 3: If a majority of the nodes, including the leader, replicate the command successfully to their local log & acknowledge it to the leader, the leader then commits the command to its own state machine. Step 4: The leader acknowledges the status of the commitment to the client. Q. How is a log entry represented? A. A log entry typically contains the following information: index: an increasing sequence number for each entry in the log; term: a counter value indicating the current term when the leader received the entry from the client; command: the actual client data that we want to store in the system. Q. What is the importance of majority / quorum in the context of Raft replication? A. Writing to a majority (3 out of 5 nodes in a 5-node cluster, for example) makes sure that as long as any 3 nodes are alive & connected to the cluster, we won't lose data since it exists on at least one of them. To guarantee no data loss in the event of a failure, Raft uses a quorum. Q. Do participating nodes manage any states / variables? A. The following states are managed by all of the nodes. The indices in the log and index arrays are 1-based in the description below; however, you can use zero-based indexing as well. Carefully read the following diagram, as these states / variables are going to be used heavily in the further discussion. Figure 11: Node variables Q. What is the significance of lastApplied ? A. lastApplied keeps track of the index in the local log up to which entries have been applied to the local state machine of a node. Remember that committing an entry to the log does not mean it's applied immediately. Typically the component that applies entries to the state machine is different from the one which handles committing to the log. Assume that separate threads are used for performance reasons. Hence the lastApplied state is managed separately. commitIndex is usually a few milliseconds ahead of lastApplied , although eventually lastApplied catches up with commitIndex . Q. What is the significance of matchIndex[] ? A. Let's look into this after we discuss Algorithm 2 in a later section. Let's take a look at what the RPC request & response structures look like.
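Before the RPCs, note that Figure 11 above is an image in the original; as a rough sketch (field names follow the description above and the standard Raft paper, so the article's actual Go definitions may differ), the state each node keeps looks something like this:

package raft

type LogEntry struct {
	Index   int         // 1-based position in the log
	Term    int         // term when the leader received the entry from the client
	Command interface{} // the actual client data
}

// Persistent state kept by every node.
type PersistentState struct {
	CurrentTerm int        // latest term this node has seen
	VotedFor    int        // candidate id this node voted for in CurrentTerm, or -1
	Log         []LogEntry // the replicated log
}

// Volatile state kept by every node.
type VolatileState struct {
	CommitIndex int // highest log index known to be committed
	LastApplied int // highest log index applied to the local state machine
}

// Volatile state kept only by the leader, re-initialized after each election.
type LeaderState struct {
	NextIndex  map[int]int // per follower: index of the next log entry to send
	MatchIndex map[int]int // per follower: highest index known to be replicated there
}

The AppendEntries & RequestVote RPCs described next read and update exactly this state.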
Their structure is going to be very import in upcoming discussion. They are pretty much self explanatory below: AppendEntries (AE) RPC Request & Response: Figure 12: Append Entries RPC request response RequestVote (RV) RPC request & response Figure 13: Request Vote RPC request response Important Observation: When a node receives AppendEntries & RequestVote requests from a sender, it returns its current term in the response. It causes the sender to update its own term in case the current node has higher term so that eventually all the nodes can agree on the appropriate term. Term update happens the other way also i.e; if the sender has higher term, the receiver updates its own term when it receives a request. As stated earlier, each node manages its own term. So this mechanism is crucial to ensure that terms across nodes converge to a single monotonically increasing value eventually. As you can see in Figure 11, the leader keeps track the highest committed log entry index in a variable called commitIndex in its volatile state, it sends this value in AppendEntries RPC in the field leaderCommit so that the followers get to know the latest leader commit index & they can commit those entries as well. We’ll see how it happens when we discuss the related algorithms in details later. As stated earlier, Raft ensures that logs across all the nodes are exactly the same. It makes the log abstraction simple but Raft employs certain techniques to give proper safety guarantee around this design decision. Raft Guarantees Raft is built around certain properties & it guarantees that these properties always hold true: Leader Election Safety For a given term, at most one leader would be elected. Since we have already seen that Raft uses quorum, unless a candidate gets (N/2) + 1 votes in the election process, it can’t become the leader. This means at most one leader would be chosen in a term. Append-Only Leader A leader never overrides or deletes an entry in its log. It just keeps on appending. Log Matching Property Property 1: If two entries in different logs have the same index & term, they store the same command. Since the leader stores the given command only once at a certain index with a certain term, this property is always true. For empty logs comparison also, this property holds good. If two entries in different logs have the same index & term, they store the same command. Since the leader stores the given command only once at a certain index with a certain term, this property is always true. For empty logs comparison also, this property holds good. Property 2: If two entries in different logs have the same index & term, then the logs are identical in all preceding entries. Leader Completeness Once a log entry is committed in a given term, it will be present in the logs of the leaders for all higher terms. So committed entry never gets lost even though the leader changes. State Machine Safety If a server has applied a log entry at a given index to its state machine, no other server will ever apply a different log entry for the same index. Remember, whatever algorithms, success or failure cases we discuss later, they revolve around these properties. Important Algorithms Since, we are diving deep into the replication section, let’s look at the following sample code snippets to understand the algorithm both at the leader & follower side. 
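Figures 12 and 13 above are images in the original; before the algorithm snippets, here is a rough sketch of the request and response shapes they describe, using only the field names mentioned in the surrounding text (the article's exact Go definitions may differ slightly):

package raft

// AppendEntries (AE) request and response, as described around Figure 12.
type AppendEntriesArgs struct {
	Term         int        // leader's current term
	LeaderId     int        // so followers know who the leader is
	PrevLogIndex int        // index of the entry immediately preceding Entries
	PrevLogTerm  int        // term of that entry, used for the consistency check
	Entries      []LogEntry // empty for a pure heartbeat
	LeaderCommit int        // leader's commit index, lets followers commit too
}

type AppendEntriesReply struct {
	Term    int  // receiver's current term, so a stale leader can step down
	Success bool // true if PrevLogIndex/PrevLogTerm matched and Entries were appended
}

// RequestVote (RV) request and response, as described around Figure 13.
type RequestVoteArgs struct {
	Term         int // candidate's term
	CandidateId  int // who is asking for the vote
	LastLogIndex int // index of the candidate's last log entry
	LastLogTerm  int // term of the candidate's last log entry
}

type RequestVoteReply struct {
	Term        int  // receiver's current term
	VoteGranted bool // true if the vote was granted
}

The leader- and follower-side walkthroughs below assume request/response types along these lines.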
The code will help you to visualize different variables & states managed by the servers (it’s written in GoLang however you don’t need to know GoLang to understand this piece): Note: We’ll focus only on the core parts of the algorithms, concurrency constructs like locking, sending message through channels in GoLang and all other stuffs are out of context of this article. Leader Side Log Replication Algorithm Let’s assume that we already have an acting leader. The algorithm describes what happens afterwards: Algorithm 1: Leader Append Entries Steps Line 6: Once a node is chosen as the leader, the consensus module assigns the Leader state to the node. Line 8–11: nextIndex[peerId] for each peer nodes is initialized to the length of the log of the leader. Remember, in the code, the log index is 0 based. When the leader starts up, it can only assume that all other peers are as up-to-date as it is. In case, some peer is lagging behind, the nextIndex[peerId] for that peer would be adjusted accordingly. Also the matchIndex[peerId] for each peer is initialized to -1 — it keeps track of the maximum log index till which the peer is exactly matching with the leader. Line 15: The leader starts a timer of 50 ms ( you can choose any appropriate timeout value ) to periodically send AppendEntries RPC to the followers. If there is no log entry to send, an AppendEntries with empty entries[] is sent which is considered as heartbeat. Heartbeat is necessary to prevent another unnecessary leader election when the current leader is working fine but it has no entry to replicate yet. Typically, when the leader starts, it immediately sends a heartbeat to the followers. Line 19–29: The leader keeps on sending AppendEntries RPC in the background. Line 35: The method name leaderSendHeartbeats() is misleading here since it’s actually sending AppendEntries RPC. As stated earlier, if there is no logs to send, that RPC can be called a heartbeat. However, even if there is some logs to send, still it can be considered as heartbeat only as any AppendEntries RPC logically means the leader is healthy, that’s why it’s sending these messages. Line 43–58: This is the AppendEntries request preparation phase. For log consistency checking purpose, we need to send previous log entry index & term, prevLogIndex is initialized with nextIndex[peerId]-1 ; if the log is empty, nextIndex[peerId] = 0 , prevLogIndex = -1 & prevLogTerm = -1 . Followers would be able to handle these negative values. If some log entries exist from nextIndex[peerId] , the RPC would send all of them in the entries[] array. If no suitable log exists, entries[] is empty. Note that line 57 sends current leader commit index to the followers — it helps the followers to identify till what index, the leader has committed the logs, depending on it, followers can also commit their logs to their individual state machines. Line 62: AppendEntries RPC is sent to all the followers in parallel. Line 65–69: AppendEntries response from a follower contains its current term. In case the follower is a more suitable leader than the current leader, the response contains higher term than the current leader’s term. In that case, the current leader steps down, becomes a follower, resets election timer & other states as necessary. This step ensures that, Raft has a single leader & the most suitable leader is the given preference. We’ll see the leader election algorithm in later sections to clarify this part. 
Line 71–75: If the current leader is the most suitable one & the follower is successfully able to replicate entries[] to its log, it returns success = true in the AppendEntries response. At this point, the leader adjusts nextIndex[peerId] to previous value + length of whatever logs it sent in the request for that follower, matchIndex[peerId] is set to nextIndex[peerId]-1 since the follower is successfully able to replicate all the entries sent. Line 77–90: In this step, the leader finds out the maximum log index since the last commit index, till which majority of peers (including the leader itself) have been successfully able to replicate the leader logs — this becomes the leader’s new commitIndex . As we have seen the matchIndex[] keeps track of log index for individual replicas till which leader logs have been successfully replicated, the leader uses this information in this step to figure out the commit index for the current term here. In the next AppendEntries RPC call, the leader would pass the new commit index in leaderCommit field. The followers would apply some logic using this field to identify till what index they could commit their individual logs. In case, there is no change in the commit index since the last one, no issues would happen as there is nothing to commit. Line 95–98: In line 71–75, the follower successfully replicates the leader log & returns success because the leader log’s prevLogIndex & prevLogTerm are matching with the follower ( we’ll see this logic in the following section ). In case, the follower could not cope up with the leader ( probably because it crashed mid-way or network partition happened ), these values won’t match. So the follower returns false in AppendEntries response & the leader adjusts nextIndex[peerId] for the follower by decrementing it. Unless the follower catches up with the leader, this process is retried. There can be possible optimizations to reduce number of such calls here, it’s out of scope here. Follower Side Replication Algorithm The algorithm to append entry in the follower is relatively easier & goes like below: Algorithm 2: Follower Append Entries Steps Line 19: the method parameter args AppendEntriesArgs represents the AppendEntries RPC request being passed by the leader to the followers. Line 27: If the leader has higher term, then the receiving node updates its term, resets election timer, in case it’s not in Follower state, it becomes a follower. Line 33: If the leader’s current term matches with the current term of the follower, the follower attempts to replicate the log. Line 42–43: The follower checks whether its log at index prevLogIndex matches with prevLogTerm — both sent in the RPC request by the leader. If the terms don’t match or the follower has a shorter log than the leader, the request fails, false is returned as response to the RPC & step 12 of the Algorithm 1 retries unless it finds an index in the follower log where both the leader & follower terms match. This step competes the log consistency check mentioned in the step 6 of the Algorithm 1. Line 49–71: AppendEntries RPC is idempotent. Consider a scenario where the leader sends a valid AppendEntries request to a follower, the follower replicates the entries properly but fails to acknowledge success to the RPC probably due to a temporary network glitch. Since the leader has not received the response after some time, it retries the same request. So the follower should be able to identify that the log entries in the request have been already applied. 
It should not re-replicate the entries again, thus it can save some disk IO as it’s a duplicate request. This is what the code segment does here. It tries to find all matching log entries by index & term for the request. The moment either the end of log file is reached or a mismatch is found in the logs, the remaining entries in entries[] of the AppendEntries request gets replicated. So if log entries exist after the found mismatch index in the follower log, they get overridden. Look at step 11 of the Leader side algorithm: we mentioned that leader sends its commit index to the followers in the AppendEntries RPC call. Line 74–77 :This code segment identifies the commit index at the follower side. The minimum of leader’s commit index & follower log’s size-1 (0-indexed log as we mentioned earlier) is taken as the commit index. The log is then applied to the follower’s state machine. Q. What is the significance of matchIndex[] in the leader, how is it different from nextIndex[] ? A. matchIndex[peerId] in the leader keeps track of till what index in that particular follower log, entries exactly match with the leader. The follower can still have more logs than that index probably not committed yet because in Algorithm 2 line 75, we observe that followers commit logs only when they get some appropriate leader commit index in AppendEntries RPC from the leader i.e; the leader commits some log, transmits the commit information in the next AppendEntries RPC, so followers wait till the next RPC before committing any log entry. Hence there is a possibility that nextIndex[peerId] points to an index in the follower log which is not committed yet in the follower. So logically the leader can replicate logs from matchIndex[peerId] + 1 to nextIndex[peerId] to the follower. Eventually in an ideal scenario, matchIndex[peerId] matches with nextIndex[peerId] . In another words, nextIndex[peerId] is the best guess by the leader about a follower’s log from what index to replicate next, if the follower is not up-to-date, the leader adjusts this value as observed in Algorithm 1, line no 96. matchIndex[peerId] is the exact index till what they are currently matching at some point in time. In the “Leader Election” section earlier, we saw the basic leader election scenario when the cluster starts up i.e; there is no log in the system. Since we have already seen replication algorithm both at the leader & follower side, it’s a good time to examine how presence of logs in the system affect the leader election & voting process. Leader Election Algorithm In the Presence of Logs or No Logs In the absence of the leader, any node whose election timeout expires can initiate voting request process with the hope of becoming leader. Algorithm 3: Leader Election Steps Line 6: The current node assign Candidate state to itself since only candidates can participate in voting. Line 7: The candidate increases its current term since the previous term expired with the previous leader. Line 10: The candidate votes for itself. Line 19: The candidate retrieves the last index & corresponding term of its log entry. Line 22–26: With above information, the candidate initiates the RequestVote RPC process. Line 31: Other nodes / peers are asynchronously called with RequestVote RPC. Line 41–44: If any peer responds back to the RPC with higher term number, that peer is possibly ahead in the race to become the leader. 
So just to make sure that the leader election is smooth & only one leader exists at a point in time, the candidate steps down, becomes a follower & resets its election timer.

Line 45–53: If a majority of the peers respond with the same term as the candidate & grant votes to the candidate (including the candidate's vote for itself as we saw in step 3), the candidate wins the election & becomes the new leader. Hurrah!!!!

Q. What happens if the process fails to elect a leader?

A. The process continues until some candidate is elected as the leader. Typically, within one or two tries, a leader should be elected.

Q. Can there be multiple nodes in Candidate state at the same time?

A. Yes, it's quite possible. There can also be competition among multiple candidates; however, random timeouts as discussed earlier are designed to minimize such competition.

The leader election process looks quite simple, however not every candidate can win the election. There are certain rules which peers follow when they vote for a candidate to make sure that the elected leader is the most suitable one.

Vote Request Algorithm

Algorithm 4: Request Vote

Steps

Line 17: A candidate invokes its peer's RequestVote RPC with request parameters represented as args RequestVoteArgs .

Line 23: The peer retrieves the last log index & the corresponding term from its own log.

Line 26–29: If the requesting candidate has a higher term than the peer, the peer can step down to become a Follower, as it does not make sense to continue as a Candidate: it would never gather enough votes to win any election due to the lower term. It updates its term to the same as the requester's.

Line 31–37: This code block is crucial as it determines who can vote for a candidate & who can't. It conforms to the "Leader Election Safety" guarantee of Raft. The following checks are performed while voting:

- If the peer's term does not match the candidate's term, no vote is granted.

- votedFor indicates which candidate the peer has already voted for in the current term. A peer can vote for only one candidate in a given term. RequestVote is idempotent, i.e., if the peer has already voted for a candidate C & the same request is retried, the peer can still grant the vote for C . So the peer grants a vote to a candidate only when it has not voted for any candidate yet or it has already voted for the same candidate earlier. The vote is rejected if neither condition holds.

- The most important condition is: only if the requesting candidate's log is at least as up-to-date as the peer's is the vote granted. It means if the candidate's lastLogTerm in the RequestVote RPC is greater than the last log term in the peer, it can be granted the vote. In case both last log terms are the same, the candidate should have an equal or longer log than the peer. Otherwise, the vote can't be granted. This step ensures that we don't lose any data by mistake while choosing the leader.

Once a vote is granted to a candidate, the peer updates its votedFor state with the candidate id.

Let's take an example to understand voting & replication in a little more detail:

Figure 14, Courtesy: Raft Thesis

Step a: S1 is elected as the leader. The current term is 2 . Let's say X = the log entry at index 2 of S1 with term 2 . Before replicating X to the majority, S1 crashed.

Step b: S5 becomes the new leader with votes from S3 , S4 & itself since its term & log are at least as up-to-date as those of both S3 & S4 .
After being elected, S5 accepts a new entry at index 2 with term 3 , let's call it Y .

Q. Why is S5 chosen with term 3, not 2?

A. When S1 is elected as the leader for term 2 , the use case assumes that S5 also votes for S1 along with the others. So at least the majority of the nodes, including S5 , know that term 2 is the highest term seen so far in the cluster, hence they update their local term as described in Algorithm 4, line 26. While initiating the voting process for the next leader election, S5 naturally asks for leadership in term 3 .

Step c: Before replicating Y to the majority, S5 crashed. A new election happens. Both S1 & S2 can win the election since, among S1 , S2 , S3 & S4 , they are the most up-to-date with logs & there is a clear majority. S1 wins the election for term 4 . It continues replication & replicates X to S3 . Note that X is replicated to the majority but uncommitted - since S1 's current term 4 is greater than X 's term 2 , S1 can't commit the entry, as Raft does not allow committing logs from previous terms ( see lines 78 to 85 in Algorithm 1). Even though some entry from a previous term gets replicated to some follower, Raft does not keep track of how many replicas replicated that entry — the Raft design is simple, it only keeps track of the replica count for entries belonging to the current term. Once an entry from the current term is committed, older entries get committed indirectly anyway.

Q. What if the current leader does not get any entry / client request in the current term — in that case, how would the uncommitted entries belonging to previous terms ever get committed?

A. In order to tackle this scenario, once a new leader is elected, it can add an empty marker (no-op) log entry with the current term to its log & immediately send an AppendEntries RPC carrying it to the peers, which forces the followers to replicate all the entries currently present in the leader's log. Replication then happens as described earlier in Algorithms 1 & 2.

Coming back to step c, S1 accepts a new entry at index 3 with term 4 , let's call it Z . S1 now crashes. Depending on the replication status of Z , there are now two choices:

Step d1: If Z is not yet replicated to the majority by S1 , S5 can win the election again with votes from S2 , S3 & S4 , since the last log entry of S5 , i.e., Y , has term 3 whereas S2 & S3 have term 2 in their last log entry. If S5 is elected, it can override the entry at index 2 of all other nodes — so X now gets lost from index 2. Since X is replicated to the majority but not committed yet, the client is still waiting for a success response from the leader, so we can take the risk & the override can happen. Hence all nodes end up with Y at index 2 .

Step d2: But if, before electing S5 as the new leader, Z gets replicated to S2 & S3 , the replication reaches a majority. In that case, even though Z is not committed yet, S5 can't win the election as its log is not as up-to-date as the majority's ( S5 has a last log entry with term 3 , whereas the majority has a last log entry with term 4 ). Hence if S1 wins the election again, it commits Z & indirectly X also gets committed.

Pull vs Push Model

Raft is based on a push model where the leader takes the responsibility to keep track of the next index & match index of all the followers & drives the replication process in the cluster. However, it's not mandatory. Depending on your system, you could choose a pull model as well, where a follower takes care of its own replication; you would need to make the necessary changes in the code to achieve that.
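Before moving on to the summary, here is a small, hedged Go sketch of the commit rule we just discussed: the leader advances its commit index by counting replicas only for entries of its current term, so entries from previous terms get committed indirectly. The struct & field names are assumptions for illustration, not the article's actual code.

```go
package raft

// Illustrative leader state; field names are assumptions.
type leader struct {
	currentTerm int
	commitIndex int
	log         []LogEntry     // LogEntry as in the earlier sketch
	matchIndex  map[string]int // peerId -> highest index known to be replicated
	peerCount   int            // number of peers, excluding the leader
}

// advanceCommitIndex finds the highest index that a majority of the cluster
// has replicated, but only considers entries from the current term.
func (l *leader) advanceCommitIndex() {
	for n := l.commitIndex + 1; n < len(l.log); n++ {
		// Raft never directly commits an entry from a previous term.
		if l.log[n].Term != l.currentTerm {
			continue
		}
		// Count the leader itself plus every follower whose matchIndex
		// has reached n.
		replicas := 1
		for _, m := range l.matchIndex {
			if m >= n {
				replicas++
			}
		}
		if replicas*2 > l.peerCount+1 { // strict majority of the cluster
			l.commitIndex = n
		}
	}
}
```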
Quick Summary

Raft works on the principle of distributed log replication. Logs are replicated to the followers by the leader. It ensures that logs across the nodes in the cluster are in the same order as on the leader.

The leader sends log replication requests through AppendEntries RPC to the peers in parallel.

AppendEntries is treated as a heartbeat when there are no entries present in the request.

Raft replication revolves around five guarantees — Leader Election Safety, Append Only Leader, Log Matching Property, Leader Completeness, State Machine Safety.

The leader's log never gets overridden.

Both AppendEntries & RequestVote RPC requests carry the current term of the requester to the peers so that peers can update their own term if they are lagging behind.

Similarly, peers also pass on their current term to the requester through the AppendEntries & RequestVote RPC responses so that the requester can update its term in case it's lagging behind.

The leader sends the previous log entry's index & term to the follower in the AppendEntries RPC. The follower checks whether its last log entry & term match this information. If no match is found, the logs are not identical; the leader adjusts these values for the follower & sends the RPC again. The process goes on until the leader & the follower agree on a point where their logs match. This step is used as the log consistency check. Once the logs match, the follower can override non-matching log entries with new entries.

The leader only commits log entries from the current term, by counting whether a majority of the followers have replicated the entry.

When there is no leader in the cluster, a random node becomes a candidate once its election timeout expires. It votes for itself & sends RequestVote RPC to all other nodes.

While voting, in order to ensure no data gets lost in the system, it's very important to guarantee leader election safety. Any candidate whose last log term & index are at least as up-to-date as a peer's is granted the vote by that peer.

If a log entry is replicated to the majority, irrespective of whether it got committed by the previous leader or not, the new leader that gets elected will contain that log entry in its log.

Any node can vote only once per term. If a node has already voted for a candidate & the same candidate requests the same peer again for a vote in the same term, the vote can be granted.

Raft never directly commits any uncommitted entry from a previous term. While committing an entry from the current term, entries belonging to previous terms get committed indirectly.

Take some time to understand these algorithms. If required, re-read.
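The "at least as up-to-date" voting rule from the summary fits in a few lines. The following Go sketch is only illustrative; the function & parameter names are assumptions, not the article's code.

```go
package raft

// logUpToDate reports whether a candidate's log is at least as up-to-date
// as the voter's, based on the last log term & index of each.
func logUpToDate(candLastTerm, candLastIndex, myLastTerm, myLastIndex int) bool {
	if candLastTerm != myLastTerm {
		// A higher term in the last entry always wins.
		return candLastTerm > myLastTerm
	}
	// Same last term: the candidate must have an equal or longer log.
	return candLastIndex >= myLastIndex
}
```

A peer would grant its vote only when the terms match, it has not already voted for a different candidate in this term & this check passes.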
In the next part of this series, we'll discuss & simulate some cases with a Raft simulator, which hopefully will make these algorithms very clear to you. Reference
https://codeburst.io/making-sense-of-the-raft-distributed-consensus-algorithm-part-2-4f12057b019a
['Kousik Nath']
2021-02-11 05:47:19.174000+00:00
['Distributed Systems', 'Software Development', 'Technology', 'Computer Science', 'Algorithms']
874
How I Build Machine Learning Apps in Hours… and More!
How I Build Machine Learning Apps in Hours… and More! What is new in the AI world, the release of our book, and our monthly editorial picks If you have trouble reading this email, see it on a web browser. Happy Monday, Towards AI family! To start your week with a smile, we recommend you check out "Superheroes of Deep Learning Vol 1: Machine Learning Yearning" by Falaah Arif Khan and Professor Zachary Lipton, an exciting, hilarious, and educational comic for everyone who works or has worked with data. If you are into research, NeurIPS recently posted its findings from the 2020 paper reviewing process, with some insights on the submissions and historical data on primary subject areas, acceptance rate, ratings, and so on for the past two years. Next, if you have a Ph.D. and you are in the job market for a faculty position, we recommend you check out the faculty openings in the Machine Learning Department at Carnegie Mellon. They currently have multiple tenure track and teaching track opportunities for you researchers out there! If you are into tinkering with data and you are interested in forecasting epidemics (specifically COVID-19 in this case), we recommend you check out this post by Kathryn Mazaitis and Alex Reinhart on how to access COVIDcast's Epidata API, which provides free access to CMU Delphi's COVID-19 surveillance streams. 📊 For a limited time, we are taking discounted pre-orders on our book "Descriptive Statistics for Data-driven Decision Making with Python" — a guide to straightforward, data-driven decision making with the help of descriptive statistics. Ordering our book also gives you access to any future updates made to it — support Towards AI's efforts and help us improve to provide you with better content. 📊
https://medium.com/towards-artificial-intelligence/how-i-build-machine-learning-apps-in-hours-and-more-486955768aa1
['Towards Ai Team']
2020-11-12 21:32:15.603000+00:00
['Technology', 'Innovation', 'Artificial Intelligence', 'Education', 'Science']
875
Remembering your Why
Anyone familiar with the work of Simon Sinek will be familiar with his “Start With Why” philosophy, putting your purpose, your ‘Why’ at the core of what you do. Whilst I think most businesses to aim to adhere to this, it’s very easy to become so immersed in the day to day of what you are doing that it can be difficult to lift your head up and remind yourself why you are doing it. This week, I’ve had a few great reminders of exactly why we are building www.my2be.com The original purpose behind my2be was to help connect everyone with Mentors to help them through their education, career and life, but also to help people connect informally, so they can at least get some answers to help them progress. This was born from personal experience, I graduated university the same summer as the global recession hit in 2008, and the job market was turned upside down. This gave birth to lots of bogus companies that were basically pyramid schemes dressed up as marketing companies, pedalling people for little or no pay, and promising them the world in a matter of weeks. I was reminded of this this week whilst I was in a coffee shop when I saw two young lads going through an induction to such a company. Ironically it was at Manchester Piccadilly Train Station, the same station I walked away from such a job. I was instantly reminded of my experience and remembered that the only reason I even ended up there was because I felt like there were no other real options. How wrong I was, there is a plethora of options, even in a recession, but people struggle to find those who can guide them and point them in the right direction. This reminded me of exactly why we created my2be, to help these young people avoid the bullshitters and realise the opportunities available to them. Whilst this was a nice reminder, later in the week, I was hit with total validation of our ‘Why’. One of the features we have built is ‘Virtual Coffee’ where you can connect informally, with no further commitment compared to a mentor request. I received a Virtual Coffee request from a student that saw me speak at Manchester Entrepreneur’s What Next Conference. He was feeling a little lost, had a desire to start his own business, but didn’t have any idea how to start or which direction to take. We spoke via video call, and I could see parallels with my situation when I started out, and gave advice based on my experience. However, I could sense that there was still something holding him back, and when I asked, he very openly said he was simply lacking in confidence. Whilst there is no magic formula for confidence, I reassured him of what he’d already done, and gave a few pointers from what I had learnt. His reaction was incredible, a weight had been lifted, and I hadn’t performed any miracles, I just had experience that he could relate to, and this was absolute validation of our ‘Why’. We built this platform so that people can connect, build relationships, and help each other. We are striving to create the best possible experience and instances like this help us to remember why we started in the first place so we can help many, many more people now and in the future.
https://medium.com/@adam_18236/remembering-your-why-9cacf86cf60c
['Adam Mitcheson']
2019-04-12 08:53:56.762000+00:00
['Entrepreneurship', 'Education', 'Careers', 'Technology', 'Mentorship']
876
Mining Industry Continuous Advancements
Mineral reserves are naturally occurring accumulations or quantities of metals or minerals of adequate size and concentration that may have commercial significance under the right circumstances. Ore is a term used to describe economic quantities of metals or other mineral resources. Mineral products are naturally occurring mineral quantities that may be economically extracted. The distribution of mineral deposits is determined by the geological processes that formed them. Mineral deposits are therefore commonly clustered in geological provinces (mineral provinces or mineral districts), with some provinces being strongly endowed in particular mineral commodities. Check disclaimer on my profile. Global demand for metals has increased considerably over the past decade. Geologists are developing new approaches for studying ore deposits and discovering new sources. (1) What should be the effect of these actions taken by the mining industry on mineral exploration? Let's take a look in this article. Mineral exploration and development are investigative activities prior to mining. The rewards of successful exploration and development can be large, if a mineral deposit is discovered, evaluated, and developed into a mine. For a mining company, successful exploration and development lead to an increase in the value of the company. Sponsored post. For a local community or nation, successful mineral exploration and development can lead to jobs — often well paying — that otherwise would not exist; to new infrastructure, such as roads and electric power supplies, that are catalysts for broader, regional economic development; and to increased government revenues that, in turn, can be invested in social priorities such as education, health care, and poverty alleviation. In order to satisfy the global need for mineral reserves, mining companies are coordinating with each other to improve the mining industry. (2) This is outstanding work from the mining industry! As a result, it is reasonable to anticipate that the mining industry will be in fine condition in the future! Let's keep an eye on this company! Exploration and discovery of minerals are data-gathering practices. Mineral discovery and production, in this context, refer to a series of practices that gather data needed to locate mineral deposits and then determine whether or not they can be built into mines. The mining industry and the companies involved just continue to raise the standard! I'm excited to see how much this industry will progress! Source 1: http://www.ga.gov.au/scientific-topics/minerals/mineral-exploration/deposits-events Source 2: https://www.miningnorth.com/_rsc/site-content/library/education/Mineral_Exploration_&_Development_Roderick_Eggert_Eng.pdf
https://medium.com/@pixiedust20/mining-industry-continuos-advancements-638d291caee8
['Pixie Dust']
2021-05-03 11:07:25.914000+00:00
['Stock Market', 'Finance', 'Technology', 'Mining']
877
Forget Steve Jobs and Bill Gates. Elon Musk Is Redefining Innovation
Forget Steve Jobs and Bill Gates. Elon Musk Is Redefining Innovation To innovate will no longer be just releasing another website or app Photo by Matt Ridley on Unsplash We have always looked upon Steve Jobs and Bill Gates as the most significant innovation gurus in the world. I mean, Steve Jobs’ Apple revolutionized the way we use our phones, creating the first smartphone along the way, and Microsoft established the widespread use of personal computers. Even nowadays we refer to a general computer as a PC. These men were the starting point of an entire technological revolution that happened (still happening?) since the ’80s, where people started using computers for personal use. This growth was even more noticeable when the modern Internet was made available in 1995. The following years brought new tools that made our lives easier almost every year. Windows was released, Google was created, Amazon was founded, PCs started to become cheaper and more people could afford to have one at home. At the moment, it’s hard to find someone without a computer and easy internet access in the developed world. Things changed a lot in 30 years. Image from pplware Computers and Smartphones Are the Past in Terms of Innovation The computer and smartphone industries are starting to stagnate. There is way too much competition for new companies to thrive, and the established players are hardly bringing something new to the table. I mean, foldable smartphones? Yeah, it’s cool to show to our friends and all, but it is not something that one was hoping for their phones to have. Don’t take me wrong: It is an engineering marvel. I wonder how they managed to make a display that can be folded in half. But the average consumer doesn’t care about the technical stuff. What can it do for me? The functional part of technology is what makes it either be a success or a disaster. We have plenty of examples of great technology that were never a success because it didn’t match the user’s expectations:
https://medium.com/illumination/forget-steve-jobs-and-bill-gates-elon-musk-is-redefining-innovation-1dce15c0495e
['Emanuel Marques']
2020-07-27 04:59:35.185000+00:00
['Technology', 'Innovation', 'Space Exploration', 'Future', 'Inspiration']
878
Can VR Help Tackle Unconscious Bias?
Immersive tech could be a helpful tool in addressing society’s pervasive problem of systemic racism. Technologies such as Virtual, Augmented, and Mixed reality — collectively referred to as XR — have long been touted as an “Empathy Machine,” and for very good reason. They enable us to easily change our perspective and experience what it’s like to “walk a mile in someone else’s shoes,” which makes them particularly well-suited for soft skills training. VR pioneers such as Nonny de la Peña explored these empathy-building possibilities in works such as “Across the Line” and there is a growing body of research looking at the impact of virtual embodiment on behavior. Some of the most salient examples of this include a study which indicated that convicted domestic abusers embodying a woman in an assault scenario will improve their faculty to recognize fear in a woman’s face, or that people embodying a man becoming homeless would be more likely to sign a petition to build social housing sent to them weeks later. Do virtual lives matter? Christophe Mallet, co-founder of BODYSWAPS — an immersive technologies innovation agency based in London — points to a study which showed that people who embodied a Black man in virtual reality were less likely to pronounce a guilty verdict in mock legal scenarios involving Black defendants and a limited body of evidence. “You can only ever be yourself. That’s part of the human condition. That means we have a natural tendency to dehumanise the ‘other’, to diminish their suffering, to negate their complexity. And that’s how we often justify the unbelievable amount of cruelty we inflict upon each other as a species,” Mallet adds. In the face of the important debates currently taking place following the tragic killing of George Floyd and the growing momentum behind the Black Lives Matter movement in the U.S. and around the world, this begs the question of what role technology can play in helping to address the monumental challenge of tackling systemic racism in society. “Everyone, including those in the technology industry, has a moral obligation to help tackle the problem of systemic marginalization and racism,” says Wendy Morgan, CEO and founder of Shift, which uses VR and AI-based training to allow people to recognize their own biases and gain tools to address it and track lasting change and progress. She says many people who have taken their virtual courses report that the experience has changed their lives and views. Yet although the technology industry has an opportunity to create unique and powerful tools that affect change, Morgan stresses that lasting change can only stem from long-term engagement with the subject through a variety of learning experiences and systemic change within organizations, including hiring practices, operating procedures and a careful examination of how each person is treated. A matter of perception In the notorious “Heidi vs Howard” case study, students at Columbia Business School were asked to rate their perception of the profile of a successful Silicon Valley venture capitalist. Half of the class received the profile with the name Heidi at the top, and the other half received a copy with the name Howard. The students rated the VC with the male name as more likeable, even though their professional qualifications were the same. This is a classic example of how even knowing a candidate’s name can dramatically bias processes such as recruitment. 
Many companies have now adopted a system where people charged with selecting and evaluating applications and resumes do not actually see the applicants name, but once those same candidates reach the interview stage, those biases surely reassert themselves. Could virtual recruitment practices be the solution? Imagine a world where, instead of meeting in person, applicants are represented by an avatar that levels the playing field and helps tackle and change enduring racist institutions and procedures. “With Bodyswaps, the VR soft skills training platform I co-founded, we allow people in embodied simulations to speak to virtual characters using their own voice then swap bodies to watch themselves back, effectively sitting across from themselves and learn by experience how they impact others. This creates unprecedented self-awareness and, with practice, builds the confidence to change one’s behavior,” says Mallet. Apart from sounding like quite an attractive option in these times of social distancing and remote working dictated by the global COVID-19 pandemic, this presents an opportunity to either bypass such issues of bias — be it gender, race, age, disability or sexual orientation — or at the very least highlight these in order to force us to confront them head-on, much as the Columbia University study did. Hacking unconscious bias Researchers have long been interested in the way we relate with avatars, and how these digital representations of ourselves affect our identity and perceptions of one another. So as we start spending more and more time in ever-more-realistic virtual worlds, does the ability to control the image we project to others during social interactions present an opportunity to bypass bias altogether? “VR is a fascinating medium because it allows you to embody someone else, to experience being this ‘other’. In VR, you step into a virtual body and evolve in a simulated world in a naturalistic way, it’s a machine to create lived-in experiences. And that new body of yours can be of any gender, race, age or type of disability,” explains Mallet. Morgan adds that her company has seen demand surge for their virtual training solutions, but this in itself can be a double-edged sword. “It makes me unbelievably sad that the continued loss of lives and the violent backlash is what has caused this demand to go up, however, now that there is a greater awareness of the problems that have plagued our world throughout all of history, it gives me hope that everyone is becoming aware of this issue and are reaching out for help and solutions. We must at all costs avoid racial tokenism and not just give lip service or check a box on this. People and organizations must be willing to do what it takes to make a lasting impact and change, and that takes time and help.” This article was originally published on Futurithmic Tech Trends offers a broad range of Digital Consultancy services to guide companies, individuals, and brands in effectively leveraging existing and emerging technologies in their business strategy.
https://medium.com/edtech-trends/can-vr-help-tackle-unconscious-bias-82a5fb80c988
['Alice Bonasio']
2020-09-02 17:11:13.486000+00:00
['Bias', 'Technology', 'Virtual Reality', 'BlackLivesMatter', 'Tech']
879
China is installing spyware app on tourists’ phones
China is the world's most populous country, holding the record for the largest population in the whole world. Every year tourists visit the country in large numbers. But now, some news has spread regarding tourists in China. Visitors to Xinjiang in northwest China are being surprised by a spyware app that is forcibly installed on their phones by Chinese guards at the border. According to the reports, tourists are being stopped at the Chinese border in the Xinjiang region and their smartphones are seized. The guards at the border take their phones into a separate room, where the app is installed. For installing the app on iPhones, a dedicated machine is used. The name of this app is Fengcai, which is a Chinese name; another name of this app is BXAQ. Fengcai was developed by a Chinese company named Ninjing FiberHome Starrysey Communication Development Company Ltd. and is distributed by Chinese authorities. This app is also called a surveillance app. HOW DOES IT WORK? The motive of the Chinese government is simply spying. When the app is installed on a phone, it collects all the personal information including text messages, call logs and a list of installed apps, and that data is sent to a remote server for review. It extracts emails, contacts, etc. and can be used for tracking movements. It is reported that Fengcai checks the content against a list of 73,000 items regarded as suspicious or warranting further investigation, including material such as instructions for making illegal weapons. The tourists have not been warned about this, nor have they been told what this software is looking for. It is not the first time Chinese authorities have been caught using spyware to keep an eye on people in the Xinjiang region, as this kind of intensive surveillance is very common in that region and many other security measures have already been taken there by the Chinese government. However, it is the first time tourists are believed to have been the primary target. It is said that once the inspection is over, the app is uninstalled by the guards and no information is sent to their server after that. But sometimes the guards may forget to do so. So, whatever the intentions of the Chinese government, it may hurt the feelings of the tourists. So, what do you think about this? Give your suggestions in the comment box. CAMERA TRAPS We live in a time where artificial intelligence is spreading day by day. Now, AI is turning towards wildlife conservation. A security camera system named Trailguard AI has been developed. These are infrared cameras which play a vital role in detecting poachers before they attack the animals, and thanks to their small size they cannot be seen by the poachers. At first, there was a problem: these IR cameras were not able to distinguish between the heat signatures of humans and animals. So, to improve them, about 180,000 heat maps of people and animals were collected. Using this data, a deep learning algorithm was developed to recognize only humans. This is called the Systematic Poacher Detector (SPOT). This human detection algorithm runs directly on an Intel-based processor in the camera head, and the device has a battery life of about one year.
The benefit of this SPOT is that it only detects the human and so the battery can be saved as it will not detect useless things. It detects the poachers attacking at the animals before they can cause harm to the animals. Once we become able to detect the poachers, we can build a system by which we can catch the poachers like using emergency alarms. This year this Trailguard AI system has spread across many national parks and conservation groups in Africa and has expanded to South America and Southeast Asia DRONES Except this new AI technology used for wildlife conservation, there is another way by which the endangered species are protected with the help of technology. An Astrophysics Research Institute at John Moores University in the city of Liverpool in England is working on a data collecting Computer-based system capable of identifying animals with the help of a drone. These drones have cameras attached to them. The whole data about the animals is collected and that data helps to understand how much the animal number is decreasing due to habitat destruction,poaching or other factors. According to the decreasing number, we can recognize the endangered species and then we can take steps to save them. But using these cameras attached drones are not up to the job. The problem coming with these drones is that when the pictures captured by the cameras on the drones are seen on computer, then it has to be decided that whether any animals are present or not and then counting and recording the results. Means, there can be many useless pictures captured by the cameras on drones. Therefore, we really need some improvement in this computer-based system But overall, our technology has done a lot for wildlife conservation and developing more and more in this field.It has proved our thinking wrong that the advancement in technology is causing harm to wildlife. So, What do you think about it? Can you suggest some other ways for wildlife conservation through technology? Let me know in the cmment box. Smartphones are now common in our life,they have become an important part of our life.But, Do you ever think about a foldable smartphone?.A phone whose display screen can be folded.Yes,foldable smartphones have arrived.The era of foldable smartphones arrived with Samsung’s first attempt of launching a smartphone with a folding screen. The foldable smartphone is considered as the best featured smartphone till now.It has given a name- Samsung galaxy fold. These smartphones will have a feature of bendable display and this will make the smartphone more interactive and interesting.It is the first 5G smartphone with the largest screen yet. The smartphone has a 7.3 inch screen that can fold over half.This new feature will make reading books,playing games and watching videos ,an absolute pleasure.The most important feature of this Samsung Galaxy Fold is that you can use upto 3 apps at a time on the screen. The main feature of the samsung galaxy fold is that it has two screens: one is outer screen and the other screen is revealed when you open up your phone . 
Galaxy fold has Wireless charger as well as it is designed to act as a wireless charger for other devices means it can share its power with other devices without any wire.There are many other features present in that smartphone which are not in others.You can know more this smartphone from here: https://pricebaba.com/mobile/samsung-galaxy-fold The Galaxy Fold opens smoothly like a book and closes flat with a click.It can act a phone as a well as a tablet means.Basically it has a large size as a tablet but if you fold, you can convert it into phone for its easy handling and for putting it into your pocket. In November 2018,Samsung talked about its foldable smartphone concept and it was expected to be launched on April 26,2019.But despite of the surity given by the samsung that its smarphone had been thoroughly checked,when it was put into the hands of the reviewers its screen get crumbled and cracked.But now,as said by the company,the problem has resolved,and now the phone will be launched sooner in the market.It will be available at the rate of $1,980 with a pair of samsung’s wireless earbuds and Slim Cover. The Galaxy Buds pair are for hands-free music and while the Slim Cover protects your phone. Along with samsung,there are companies like the HUWAI,XIAOMI,LG,TCL which are going towards launching the foldable smartphones. Samsung Galaxy Fold in India Samsung Galaxy Fold price in India is expected to be Rs. 140,790. Samsung Galaxy Fold Expected to be launched on Jun 11, 2019. it is expected to available in Space Silver, Cosmos Black, Martian Green, Astro Blue colour According to me,these smartphones are going to be interesting.So what do you think about it? Do you find this new technology interesting? Give me your suggestions. . Thanks for joining me! Good company in a journey makes the way seem shorter. — Izaak Walton When babies are born before 37 weeks of gestation,they are very fragile and premature, and not ready to be here.According to World Health Organisation,around 15 million babies are born premature every year. This number is increasing every year.and the result is the death of more and more infants.To solve this problem ,our researchers have developed Artificial Womb-a biobag containing Amniotic Fluid where the infants can continue their growth outside mother’s womb. These artificial wombs will prove useful in the situations when a baby is born premature and need more nourishment and time for developing.It is developed by the combined team of researchers from University of Western Australia and Tohoku University Hospital Japan . The Artificial womb consists of a biobag which has Amniotic fluid which the infants absorb as in a normal development.and there is a oxygenator from where the need of oxygen can be fullfilled. The heart pumps blood which flow through the umbilical cord and goes to oxygenator and comes back , in this way the blood gets oxygen. The researchers used eight lamb fetuses that were 105 to 115 days old,a level comparable to the 23 weeks old human fetus. 
It was seen that their brain and organs developed normally.The pinkish creatures opened their eyes ,they started fattening up and white wool grow on them.This experiment leads to the result that the premature child can be developed in the Artificial womb.I know you will be thinking about the machines used in hospitals in which the premature babies are put for more development.But I tell you that according to researchers,these machines demonstrate an arrest of lung development which results in the restriction of the lung function. But with the Artificial Womb Technology, the infants will grow without any restriction to lungs. The developers of AWT(Artificial Womb Technology) certainly hope to move from tests on lambs to human babies.This technology will let women reproduce without risking their physical, economic and social well being.No time needed for dealing with pregnancy related complications.But, it will take a lot of time because not everybody is agree with it.As some are saying that psycological emotions ties between mother and baby will be lost if suddenly there will be no physical connection between the two. Some have argued that it is just unnatural .They are saying that it is not the way God wants us to have babies All have their different opinions. Now let me know what do you think about this new technology Give your suggestions. Let me know in the comment box. We all use internet and Wifi connections daily .We have to do a lot of work with the help of internet.Without internet our daily life becomes very difficult .BUT, many times it happens that while using internet ,our connection suddenly get slow . . DO YOU GET ANGRY WHEN YOUR INTERNET OR WIFI CONNECTION SUDDENLY STOPS WORKING?? BUT,Why this happens?? Actually what happens,when we are using the internet connection,like when we are searching something on google or waiting for a message.then we are in a “prepare and hold” state,Actually we expect the speedy results,.So, when the connection suddenly get slow,then our positive expection get turned into tension,agressiveness,and frustration. Then the “Loading “ and “Wait for a while” factors make us angry. About 10 years from know,it took 15–20 minutes to boost a computer but today if we have to wait for only 10 seconds then we get angry and yell at the computer.DO YOU KNOW WHAT IS THE REASON OF ALL THIS?? The reason is our expection of immediate results.This shows that we people are psycologically dependent on digital technology.This is called Maladaptive Responses.Therefore when suddenly it stops working,we start feeling crazy and face these Maladaptive Responses. The findings have shown that when the digital technology suddenly stops working then people face FOMO.FOMO means Fear of Missing Out .Actually when the wifi or the internet connection suddenly get lost ,then people start missing the things that the other people enjoy when they are online. According to researchers,both the FOMO and Maladaptive responses have impact on our concentration and acheiving goals,resulting in poor performance. Almost everyone is facing these problems. Now the question is : How can we solve these problems? Some psycologists have given the answer of this question.They suggest that whenever you have to wait for connection ,instead of staring continuously at the loading screen,engage yourself in any task.This will stop you to get frustrated and angry.Also you can read some books or listen music so that you can remain Now let me know what do you think about this?? 
As we are all internet users, we all face problems related to slow internet in our daily life. So, according to you, what should we do to behave smartly and calmly in these situations? Let me know in the comment box.
https://medium.com/@ritikakashyap678/china-is-installing-spyware-app-on-tourists-phones-9433556febf7
['Ritika Kashyap']
2019-07-10 11:40:25.185000+00:00
['Technology News']
880
State of the Market: Retail Banks in the ERA of the Connected Consumer
State of the Market: Retail Banks in the ERA of the Connected Consumer Infographic: State of the Market: Retail Banks in the ERA of the Connected Consumer The financial industry is changing dramatically and retail banking is no exception. In the age of digitally-empowered consumers, their changing behaviours, increasing expectations and adoption of new technology all show that the traditional approaches used by banks are no longer a viable solution in a competitive environment. PwC predicts that customer intelligence will be the most important predictor of revenue growth and profitability for banks. So, what does the future hold for retail banking? Download this free infographic to know the answers and more such interesting insights.
https://medium.com/@karthikeyan-v/state-of-the-market-retail-banks-in-the-era-of-the-connected-consumer-e9234b5437c9
[]
2020-12-16 15:26:54.149000+00:00
['Machine Intelligence', 'Artificial Intelligence', 'Banking Technology', 'Technews', 'Customer Engagement']
881
These are the features in ES6 that you should know
Learn functional React, in a project-based way, with Functional Architecture with React and Redux. ES6 brings more features to the JavaScript language. Some new syntax allows you to write code in a more expressive way, some features complete the functional programming toolbox, and some features are questionable. let and const There are two ways for declaring a variable ( let and const ) plus one that has become obsolete ( var ). let let declares and optionally initializes a variable in the current scope. The current scope can be either a module, a function or a block. The value of a variable that is not initialized is undefined . Scope defines the lifetime and visibility of a variable. Variables are not visible outside the scope in which they are declared. Consider the next code that emphasizes let block scope: let x = 1; { let x = 2; } console.log(x); //1 In contrast, the var declaration had no block scope: var x = 1; { var x = 2; } console.log(x); //2 The for loop statement, with the let declaration, creates a new variable local to the block scope, for each iteration. The next loop creates five closures over five different i variables. (function run(){ for(let i=0; i<5; i++){ setTimeout(function log(){ console.log(i); //0 1 2 3 4 }, 100); } })(); Writing the same code with var will create five closures, over the same variable, so all closures will display the last value of i . The log() function is a closure. For more on closures, take a look at Discover the power of closures in JavaScript. const const declares a variable that cannot be reassigned. It becomes a constant only when the assigned value is immutable. An immutable value is a value that, once created, cannot be changed. Primitive values are immutable, objects are mutable. const freezes the variable, Object.freeze() freezes the object. The initialization of the const variable is mandatory. Modules Before modules, a variable declared outside any function was a global variable. With modules, a variable declared outside any function is hidden and not available to other modules unless it is explicitly exported. Exporting makes a function or object available to other modules. In the next example, I export functions from different modules: //module "./TodoStore.js" export default function TodoStore(){} //module "./UserStore.js" export default function UserStore(){} Importing makes a function or object, from other modules, available to the current module. import TodoStore from "./TodoStore"; import UserStore from "./UserStore"; const todoStore = TodoStore(); const userStore = UserStore(); Spread/Rest The … operator can be the spread operator or the rest parameter, depending on where it is used. Consider the next example: const numbers = [1, 2, 3]; const arr = ['a', 'b', 'c', ...numbers]; console.log(arr); ["a", "b", "c", 1, 2, 3] This is the spread operator. Now look at the next example: function process(x,y, ...arr){ console.log(arr) } process(1,2,3,4,5); //[3, 4, 5] function processArray(...arr){ console.log(arr) } processArray(1,2,3,4,5); //[1, 2, 3, 4, 5] This is the rest parameter. arguments With the rest parameter we can replace the arguments pseudo-parameter. The rest parameter is an array, arguments is not. function addNumber(total, value){ return total + value; } function sum(...args){ return args.reduce(addNumber, 0); } sum(1,2,3); //6 Cloning The spread operator makes the cloning of objects and arrays simpler and more expressive. The object spread properties operator will be available as part of ES2018. 
const book = { title: "JavaScript: The Good Parts" }; //clone with Object.assign() const clone = Object.assign({}, book); //clone with spread operator const clone = { ...book }; const arr = [1, 2 ,3]; //clone with slice const cloneArr = arr.slice(); //clone with spread operator const cloneArr = [ ...arr ]; Concatenation In the next example, the spread operator is used to concatenate arrays: const part1 = [1, 2, 3]; const part2 = [4, 5, 6]; const arr = part1.concat(part2); const arr = [...part1, ...part2]; Merging objects The spread operator, like Object.assign() , can be used to copy properties from one or more objects to an empty object and combine their properties. const authorGateway = { getAuthors : function() {}, editAuthor: function() {} }; const bookGateway = { getBooks : function() {}, editBook: function() {} }; //copy with Object.assign() const gateway = Object.assign({}, authorGateway, bookGateway); //copy with spread operator const gateway = { ...authorGateway, ...bookGateway }; Property short-hands Consider the next code: function BookGateway(){ function getBooks() {} function editBook() {} return { getBooks: getBooks, editBook: editBook } } With property short-hands, when the property name and the name of the variable used as the value are the same, we can just write the key once. function BookGateway(){ function getBooks() {} function editBook() {} return { getBooks, editBook } } Here is another example: const todoStore = TodoStore(); const userStore = UserStore(); const stores = { todoStore, userStore }; Destructuring assignment Consider the next code: function TodoStore(args){ const helper = args.helper; const dataAccess = args.dataAccess; const userStore = args.userStore; } With destructuring assignment syntax, it can be written like this: function TodoStore(args){ const { helper, dataAccess, userStore } = args; } or even better, with the destructuring syntax in the parameter list: function TodoStore({ helper, dataAccess, userStore }){} Below is the function call: TodoStore({ helper: {}, dataAccess: {}, userStore: {} }); Default parameters Functions can have default parameters. Look at the next example: function log(message, mode = "Info"){ console.log(mode + ": " + message); } log("An info"); //Info: An info log("An error", "Error"); //Error: An error Template string literals Template strings are defined with the ` character. With template strings, the previous logging message can be written like this: function log(message, mode= "Info"){ console.log(`${mode}: ${message}`); } Template strings can be defined on multiple lines. However, a better option is to keep the long text messages as resources, in a database for example. See below a function that generates an HTML that spans multiple lines: function createTodoItemHtml(todo){ return `<li> <div>${todo.title}</div> <div>${todo.userName}</div> </li>`; } Proper tail-calls A recursive function is tail recursive when the recursive call is the last thing the function does. The tail recursive functions perform better than non tail recursive functions. The optimized tail recursive call does not create a new stack frame for each function call, but rather uses a single stack frame. ES6 brings the tail-call optimization in strict mode. The following function should benefit from the tail-call optimization. function print(from, to) { const n = from; if (n > to) return; console.log(n); //the last statement is the recursive call print(n + 1, to); } print(1, 10); Note: the tail-call optimization is not yet supported by major browsers. 
Promises A promise is a reference to an asynchronous call. It may resolve or fail somewhere in the future. Promises are easier to combine. As you see in the next example, it is easy to call a function when all promises are resolved, or when the first promise is resolved. function getTodos() { return fetch("/todos"); } function getUsers() { return fetch("/users"); } function getAlbums(){ return fetch("/albums"); } const getPromises = [ getTodos(), getUsers(), getAlbums() ]; Promise.all(getPromises).then(doSomethingWhenAll); Promise.race(getPromises).then(doSomethingWhenOne); function doSomethingWhenAll(){} function doSomethingWhenOne(){} The fetch() function, part of the Fetch API, returns a promise. Promise.all() returns a promise that resolves when all input promises have resolved. Promise.race() returns a promise that resolves or rejects when one of the input promises resolves or rejects. A promise can be in one of three states: pending, resolved or rejected. The promise will be pending until it is either resolved or rejected. Promises support a chaining system that allows you to pass data through a set of functions. In the next example, the result of getTodos() is passed as input to toJson() , then its result is passed as input to getTopPriority() , and then its result is passed as input to the renderTodos() function. When an error is thrown or a promise is rejected, handleError is called. getTodos() .then(toJson) .then(getTopPriority) .then(renderTodos) .catch(handleError); function toJson(response){} function getTopPriority(todos){} function renderTodos(todos){} function handleError(error){} In the previous example, .then() handles the success scenario and .catch() handles the error scenario. If there is an error at any step, the chain control jumps to the closest rejection handler down the chain. Promise.resolve() returns a resolved promise. Promise.reject() returns a rejected promise. Class Class is syntactic sugar for creating objects with a custom prototype. It has a better syntax than the previous one, the function constructor. Check out the next example: class Service { doSomething(){ console.log("doSomething"); } } let service = new Service(); console.log(service.__proto__ === Service.prototype); All methods defined in the Service class will be added to the Service.prototype object. Instances of the Service class will have the same prototype ( Service.prototype ) object. All instances will delegate method calls to the Service.prototype object. Methods are defined once on Service.prototype and then inherited by all instances. Inheritance "Classes can inherit from other classes". Below is an example of inheritance where the SpecialService class "inherits" from the Service class: class Service { doSomething(){ console.log("doSomething"); } } class SpecialService extends Service { doSomethingElse(){ console.log("doSomethingElse"); } } let specialService = new SpecialService(); specialService.doSomething(); specialService.doSomethingElse(); All methods defined in the SpecialService class will be added to the SpecialService.prototype object. All instances will delegate method calls to the SpecialService.prototype object. If the method is not found in SpecialService.prototype , it will be searched in the Service.prototype object. If it is still not found, it will be searched in Object.prototype . Class can become a bad feature Even if they seem encapsulated, all members of a class are public.
You still need to manage problems with this losing context. The public API is mutable. class can become a bad feature if you neglect the functional side of JavaScript. class may give the impression of a class-based language when JavaScript is both a functional programming language and a prototype-based language. Encapsulated objects can be created with factory functions. Consider the next example: function Service() { function doSomething(){ console.log("doSomething"); } return Object.freeze({ doSomething }); } This time all members are private by default. The public API is immutable. There is no need to manage issues with this losing context. class may be used as an exception if required by the components framework. This was the case with React, but is not the case anymore with React Hooks. For more on why to favor factory functions, take a look at Class vs Factory function: exploring the way forward. Arrow functions Arrow functions can create anonymous functions on the fly. They can be used to create small callbacks, with a shorter syntax. Let’s take a collection of to-dos. A to-do has an id , a title , and a completed boolean property. Now, consider the next code that selects only the title from the collection: const titles = todos.map(todo => todo.title); or the next example selecting only the todos that are not completed: const filteredTodos = todos.filter(todo => !todo.completed); this Arrow functions don’t have their own this and arguments . As a result, you may see the arrow function used to fix problems with this losing context. I think that the best way to avoid this problem is to not use this at all. Arrow functions can become a bad feature Arrow functions can become a bad feature when used to the detriment of named functions. This will create readability and maintainability problems. Look at the next code written only with anonymous arrow functions: const newTodos = todos.filter(todo => !todo.completed && todo.type === "RE") .map(todo => ({ title : todo.title, userName : users[todo.userId].name })) .sort((todo1, todo2) => todo1.userName.localeCompare(todo2.userName)); Now, check out the same logic refactored to pure functions with intention revealing names and decide which of them is easier to understand: const newTodos = todos.filter(isTopPriority) .map(partial(toTodoView, users)) .sort(ascByUserName); function isTopPriority(todo){ return !todo.completed && todo.type === "RE"; } function toTodoView(users, todo){ return { title : todo.title, userName : users[todo.userId].name } } function ascByUserName(todo1, todo2){ return todo1.userName.localeCompare(todo2.userName); } Even more, anonymous arrow functions will appear as (anonymous) in the Call Stack. For more on why to favor named functions, take a look at How to make your code better with intention-revealing function names. Less code doesn’t necessary mean more readable. Look at the next example and see which version is easier for you to understand: //with arrow function const prop = key => obj => obj[key]; //with function keyword function prop(key){ return function(obj){ return obj[key]; } } Pay attention when returning an object. In the next example, the getSampleTodo() returns undefined . const getSampleTodo = () => { title : "A sample todo" }; getSampleTodo(); //undefined Generators I think the ES6 generator is an unnecessary feature that makes code more complicated. The ES6 generator creates an object that has the next() method. The next() method creates an object that has the value property. 
ES6 generators promote the use of loops. Take a look at code below: function* sequence(){ let count = 0; while(true) { count += 1; yield count; } } const generator = sequence(); generator.next().value;//1 generator.next().value;//2 generator.next().value;//3 The same generator can be simply implemented with a closure. function sequence(){ let count = 0; return function(){ count += 1; return count; } } const generator = sequence(); generator();//1 generator();//2 generator();//3 For more examples with functional generators take a look at Let’s experiment with functional generators and the pipeline operator in JavaScript Conclusion let and const declare and initialize variables. Modules encapsulate functionality and expose only a small part. The spread operator, rest parameter, and property shorthand make things easier to express. Promises and tail recursion complete the functional programming toolbox. Follow on Twitter and discover my new books!
https://medium.com/programming-essentials/these-are-the-features-in-es6-that-you-should-know-1411194c71cb
['Cristian Salcescu']
2020-06-06 09:39:31.649000+00:00
['JavaScript', 'Web Development', 'Technology', 'Programming', 'Learning']
882
How to Expose a Local Server to the Internet Without any Additional Tools
How to Expose a Local Server to the Internet Without any Additional Tools Most of the time, developers need to access servers running on local machines externally through the internet. In this tutorial, let us see how to expose a local server to the internet without using any additional tools on a Windows system. I am running an Apache server on a local machine on port number 8085, and the machine is connected to the internet through a Wi-Fi router (Motorola). Configure Router By default, routers block unsolicited traffic from the internet, so configure the router to allow internet traffic to reach the specific internal port (8085). The router management UI can be used to define forwarding rules that forward traffic from external IPs to internal IPs on a specific port. Open your default router URL; I am using a Motorola router, so access one of the below IPs: (for Motorola MB-series modems): 192.168.100.1 (for Motorola modem/router combos): 192.168.0.1 The default username is: admin The default password is: motorola Click on Advanced, then Advanced Router → Forwarding, and add a new forwarding rule. Identify the local network IP (ipconfig). Local IP Address: the local IP address identified in the previous step. Start Port: 8085 (the port on which the server is running). End Port: 8085. External IP Address: 0.0.0.0 (allow from all external IPs). Start Port: 80 (the external port used to access the server; configure 443 for HTTPS). End Port: 80. Save the configuration. Now the router accepts traffic from any external IP address on port 80 and routes it internally to the local IP address on port 8085. Configure Windows Defender Firewall Windows Defender Firewall helps secure your Windows device by filtering the network traffic permitted to enter or exit your device. Let us now configure Windows Defender Firewall to allow incoming traffic on port 8085: Open Windows Defender Firewall Advanced Settings. Define a new Inbound Rule with type Port. Specify the protocol (TCP) and specific local ports (8085). Allow the connection. Enter a name for the rule and save the rule. Now Windows Defender Firewall allows incoming traffic on port 8085. Identify your public network IP: https://www.whatismyip-address.com/?check The local server on port 8085 is now accessible externally over the internet through your public IP on port 80 (http://xx.xxx.xxx.xxx). You can even create a domain name that points to the public IP so that the server can be accessed through a domain name instead of a direct IP. The router rules and Windows Defender Firewall rules can be configured to support different internal and external ports. This approach helps you quickly access local servers externally without installing any additional tools. Enable the rules with care, and disable them while not in use to avoid exposing the local server to the internet unnecessarily.
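The walkthrough ends by opening the public IP in a browser; as a rough illustration only, a short Python sketch using just the standard library can confirm that the forwarded port is actually reachable. The IP and port below are placeholders (not values from the article), and the check should be run from outside your own LAN (for example over a mobile hotspot), since many routers do not support NAT loopback.

```python
import socket

# Placeholder values -- replace with your own public IP and external port.
PUBLIC_IP = "203.0.113.10"
EXTERNAL_PORT = 80

def port_is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Attempt a plain TCP connection and report whether it succeeded."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    ok = port_is_reachable(PUBLIC_IP, EXTERNAL_PORT)
    print(f"{PUBLIC_IP}:{EXTERNAL_PORT} reachable: {ok}")
```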
https://medium.com/tech-learnings/how-to-expose-a-local-server-to-the-internet-without-any-additional-tools-ae49e6b8fe93
['Albin Issac']
2020-11-19 05:08:32.024000+00:00
['Software Development', 'Networking', 'Technology', 'Windows', 'Programming']
883
Ambrosus: The L1 Sensor was tested, Improved NOP scripts, Ambrosus Smart Pallet introduced, The One-Pager for Entrepreneurs
Hello, guys! Nice to see you in our Ambrosus update again! Not much news has appeared during the past two weeks, but the project's progress update shows tangible development improvements. Efforts from the past months have begun to pay off, specifically with regard to the Ambrosus L1 Sensor, which was tested by a large Korean company. The Ambrosus team is aiming to build scalable, distributed, community-driven IoT infrastructure, but it also works directly, or via intermediaries and partners, on securing adoption and integration of that infrastructure among corporate and public-sector stakeholders, in addition to the wider community, to build a diverse and truly decentralised ecosystem. Moreover, new and more stable updates of the bridge validators have been deployed to the production environment. This increases transaction processing speed and means less time lost when converting AMB. Besides that, the blockchain core team is currently implementing a number of fundamental changes within the Explorer to make the Atlas node management user interface (UI) available to Ambrosus node owners in the near future. Ambrosus also proved convenient for entrepreneurs: one of the objectives of Ambrosus is to unleash creativity and entrepreneurship and to permit anyone, anywhere, to build new projects and businesses at the intersection of blockchain and IoT. The Ambrosus Smart Pallet was introduced. It is a cutting-edge data collection and management solution characterized by its ability to intelligently monitor products on a pallet in communication with the pallet itself, and even between multiple pallets at the same time. The team is moving forward on its roadmap and setting new milestones. Additionally, Ambrosus is planning events: in the coming months of August and September, Ambrosus and ChipLess team members will be touring East Asia. The road to decentralization continues. The number of social media subscribers slightly decreased, but the community is still active; there are a lot of AMB believers. Let's believe together with Paradigm!
https://medium.com/paradigm-fund/ambrosus-the-l1-sensor-was-tested-improved-nop-scripts-ambrosus-smart-pallet-introduced-the-b95fa894dc10
[]
2019-07-15 21:27:19.154000+00:00
['Ambrosus', 'Blockchain', 'Cryptocurrency', 'Blockchain Technology', 'Cryptocurrency Investment']
884
5 Techno Trends Fuelled by COVID-19
Companies responded to COVID-19-related changes much quicker than they thought possible before the crisis. They had to speed up digitalization, accelerate supply-chains, introduce remote working and collaboration, respond to the increased demand for online purchasing and services, adopting advanced technologies. While before 2020, the implementation of advanced technologies in a variety of fields was expected to take 672 days, in 2020, on average, it took less than thirty. Some of the technologies are related to operation routines and business decision making, but some are directly connected with innovations and business core. Let’s take a closer look at the top-five techno trends which we saw in 2020 due to Covid-19. Face recognition is transforming into “eye recognition” The popularity of masks and barrier systems has created difficulties with technologies such as Face ID, which is seen as a severe security threat in some countries, especially in China, where face recognition is a common technology in airports, streets, shops, and hotels. To solve the problem, the Chinese company SenseTime introduced a new method of face recognition — it requires not more dots on one’s face to be seen, but more precise analysis of those dots. The new program recognizes faces just by seeing the eye area. It seems like soon, facial recognition will be almost entirely based on analyzing the iris of the eye. Patient care is taken to a new level of safety Robots provide wide range assistance in the fight against coronavirus. Most of them were not created from scratch during COVID-19, but adopted from other industries or developed on the base of existing models used for patient care. China and Thailand, for now, are the frontrunners in introducing robotic help to daily medical care operations. Robots measure body temperature, control the availability of personal protective equipment, and are used to deliver medication and food to patients. Also, some models are already equipped with UV-lights, disinfecting rooms and themselves. They are used as a shielding layer that physically separates a medical worker or a relative/caregiver from a patient. It lowers the risks endured by medical personnel and helps not exhaust the abilities of Covid-19 fighting facilities too much. Some robots are now responsible for stable video connection between patients and their relatives or medical personnel. Here, we are not speaking about simple messengers, but about movable robotic cameras, like Pepper the Robot, with the expanded functions tailored to the communicational needs and abilities of sick people. However, there is something robots can’t do for now — they cannot comfort patients on their own. At the current level of development robots are not capable of performing actions that require empathetic intelligence at the human level. Сlinics care about the safety and effectiveness of the treatment process itself and the accompanying infrastructure, of which nutrition plans are the essential part, as they are officially included in the treatment protocols for many diseases. We receive requests from medical institutions and senior living homes for a commercial model of our Moley Robotic Kitchen, as it automates the cooking process, and eliminates infection risks, partially thanks to the in-built cleaning and UV-disinfection. While the elder patients remain the major risk group in a course of current pandemic, it is a vital necessity to plan healthy, tasty, various meals for them, both in clinic and caregiving facilities. 
Projects like Moley Robotics aim to expedite innovations in these fields. Telemedicine goes from an underdog to a superstar Applications related to telemedicine saw up to 500% popularity increase this year. Thanks to the new technologies doctors can consult patients remotely, as well as track their further state through recorded and updated vitals. These and other applications are based on technologies that help to put a bridle on the Covid-19 spread. For example, the mobile application PEPP-PT allows you to trace the chain of infection. Users’ smartphones communicate using Bluetooth wireless technology. Anonymous information is sent to a central server, after which people who have been in contact with an infected person for more than 15 minutes over the past two weeks receive mobile notifications and recommendations. Robert Koch Institute displays information about the spread of the coronavirus on an interactive map. Owners of smartwatches and fitness bracelets can download an application that constantly diagnoses vital signs (heart rate, sleep, activity level). If any of the parameters change significantly, this will mean the onset of the disease. The secret for AI being efficient in the telemedicine sphere is more players in the industry sharing data for enhancing the machine learning algorithms used in this field. No matter how difficult the COVID-19 spread is for the medical care sector, it allows to collect invaluable data related to pandemic. In the future, these data sets will be analyzed and used to create more efficient reaction-protocols, assistance centers’ organization and diagnostics in general. Food industry gets more automated than ever As “Lockdown” was named a word of the year in 2020, the HORECA industry was one of the first to suffer. It gave an impetus to advancement both in food-related technologies and business models. At first, food delivery services became increasingly popular, and soon after that, cafes and restaurants felt a lack of professionally equipped cooking space needed to serve that level of demand. It forced the development of CloudKitchens that provides cooking space and needed infrastructure for cooks and restaurants who want or have to work with deliveries only. A necessity of so-called ghost-kitchens united under several trustworthy brands also came from the consumers’ concerns related to the safety of delivered meals and products. It is only logical that in 2020 the market saw a steep increase in sales of UV-sterilizers, both portable, used by households, and commercial. Delivery-model speeded up the every-day processes in the food sector, which required not only new safety measures, but also improvement in logistics. Now, companies around the world work on blockchain technologies, allowing more predictable and responsive supply chains, that rely more on bulletproof algorithms and less on manual planning and execution. Hunter Food Study Special Report, reflecting changes in every-day cooking during pandemic, show that more than 50% of Americans claimed to enjoy cooking and having family meals more than previously. More than 44% of the research participants claimed to discover new ingredients and brands, 60% are looking for simpler solutions and for creative usage of ingredients in hand, almost and 50% would like to eat healthier and try completely new foods. While different technologies are introduced in food and horeca industries to face the challenges of pandemic, Moley Robotic Kitchen answers top-demands of households and commercial kitchens. 
It minimizes human participation in the process to bare minimum, allows to taste a variety of dishes from all over the world, disinfects cooking area and air, and enhances sustainability of supply process, analyzing shelf life of products and minimizing food waste. XR-Technologies take over new spheres X reality is defined by Wikipedia as a form of “mixed reality environment that comes from the fusion (union) of ubiquitous sensor/actuator networks and shared online virtual worlds. In short — it is a new cross reality that received a huge impetus due to lockdowns in 2020. XR is a technological solution that covers all forms of computer-assisted reality, including augmented reality (AR), mixed reality and virtual reality (VR). It significantly improves the quality, accuracy and dimension of human-computer interaction. Covid-19 increased the number of people using VR-headsets and the number of companies implementing AR technologies to make customer’s journeys better and more fun. Businesses and governmental institutions use VR/AR to teach their employees faster and more efficiently — AR/VR programs simulate real-world situations, giving workers the opportunity to gain experience even when working from home. Museums and theaters adopt AR/R technologies to lure more “visitors” deprived of cultural experience due to the quarantine. Computing giants create and update VR-classrooms, helping schools around the world not to lower the level of education. For example, In China, during the first pandemic wave, 23.8 percent of all virtual and augmented reality devices were used in education. While not many schools and households can afford the whole VR-ecosystems, many VR-schooling solutions include a simpler browser version. COVID-19 outbreak is causing widespread economic hardship and uncertainty for consumers, businesses and communities across the globe. Pursuing technological innovation in the spheres related to fighting pandemic and its consequences becomes a perspective solution for many companies. Make sure you subscribe to our newsletter as you will be the first ones to know the date of our launch.
https://medium.com/@moleyrobotics/5-techno-trends-fuelled-by-covid-19-f09e387cd4c
['Moley Robotics']
2020-12-16 14:09:37.494000+00:00
['Analytics', 'Covid 19', 'Robotics', 'Technology', 'Moley Robotics']
885
4 Uncommon Python Tricks You Should Learn
1. Multiple Assignment When you want to give several variables the same values, you will often see code that repeats the assignment statement for each variable. It may look something like the following code, which uses one separate assignment per variable: a = 1 b = 1 c = 1 In Python, we can simplify this, and assign to every variable at once. a = b = c = 1 # a == 1 # b == 1 # c == 1 After doing this, every variable was assigned to the right-most value in the chain. Since it just takes the right-most value, we can also replace 1 with a variable.
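One caveat worth adding here (not covered in the excerpt above, so treat it as an illustrative aside): chained assignment binds every name to the same object, which only matters when that object is mutable.

```python
# With an immutable value, chained assignment is harmless:
a = b = c = 1
a = 2           # rebinding a does not affect b or c
print(a, b, c)  # 2 1 1

# With a mutable value, all names point at the *same* object:
x = y = z = []
x.append("item")
print(x, y, z)  # ['item'] ['item'] ['item'] -- one shared list

# For independent lists, assign separately instead:
x, y, z = [], [], []
```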
https://medium.com/better-programming/4-uncommon-python-tricks-you-should-learn-2d3a156c10f2
['Devin Soni']
2020-02-12 17:24:54.423000+00:00
['Programming', 'Coding', 'Software Engineering', 'Python', 'Technology']
886
How I deployed my Java Springboot App with MYSQL in AWS for free
5. Deploy the spring application in cloud EC2 Clone the branch in git and switch to right branch. now at root directory where pom.xml exits run mvn package It should be success. So, .war file has been created. you can check it in target folder. Now run (replace with your own generated .war file) java -jar target/ecommerce-backend-0.0.1-SNAPSHOT.war Go to the public dns name of the EC2 and run your application with correct port and part. Next step Great your app is running in aws. But there is one problem, what happens when you want to change your code and redeploy ? Well this tutorial is not for production codebase, which you can learn here but I will give you my script to make it easy redeploying. Run sh run.sh With one command, you can redeploy your application again and your new code is live again. Works perfectly when you want to try out AWS, without going through the complexity of buildspec.yaml files. Also you can deploy multiple applications in same ec2 instance in different ports ! Resources https://bitbucket.org/ecommerce-webtutsplus/ecommerce/src
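The author's actual run.sh is referenced but not shown in this excerpt, so the snippet below is only a guess at the described workflow (pull the latest code, rebuild with Maven, restart the jar on the Linux EC2 host), written as a small Python wrapper around shell commands to stay consistent with the other examples; the artifact name comes from the article, everything else is a placeholder to adjust.

```python
import subprocess

# Build artifact named in the article; change to match your own pom.xml.
WAR = "target/ecommerce-backend-0.0.1-SNAPSHOT.war"

def run(cmd: str) -> None:
    """Run a shell command on the EC2 host and stop if it fails."""
    print(f"+ {cmd}")
    subprocess.run(cmd, shell=True, check=True)

if __name__ == "__main__":
    run("git pull")                      # fetch the latest code
    run("mvn package -DskipTests")       # rebuild the .war
    run("pkill -f 'java -jar' || true")  # stop the previously running app, if any
    # Start the new build in the background so it survives the SSH session.
    run(f"nohup java -jar {WAR} > app.log 2>&1 &")
```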
https://medium.com/javarevisited/how-i-deployed-my-java-springboot-app-with-mysql-in-aws-for-free-ec7a702e69a0
['Nil Madhab']
2020-12-13 01:41:09.689000+00:00
['Deployment', 'DevOps', 'Ec2', 'AWS', 'Technology']
887
Testing Sentry Node Architecture on Radix STOKENET
Disclaimer: Radix’ codebase currently does not support relaying connections to validators in order to correctly enable sentry node architecture. Running a validator in such a topology could lead to missed consensus rounds and corresponding loss of emissions — getting increasingly worse as more people use the pattern. It is strongly recommended by the Radix team that node-runners use a standard topology using a backup full node pattern to mitigate DDoS attack until a sentry node-like architecture is supported in the Radix node. Introduction: Radix is a new DeFi platform with seemingly unlimited abilities to scale. The multi-sharded distributed ledger applies a new Byzantine Fault Tolerant (BFT) consensus algorithm (“Cerberus”) and recorded an astonishing 1.4M tps in a proof of concept. On 28/07/2021 the network launched v1 of its “Olympia” mainnet, which utilizes an un-sharded version of the later to come sharded Cerberus codebase. Sentry Node Architecture is an infrastructure example for DDoS mitigation on validator nodes and is very successfully implemented by the majority of validators on a number of DPOS networks, including cosmos (& cosmos-sdk chains), binance smart chain or polygon. To divert possible direct attack vectors on validator nodes, multiple distributed, non-validating full nodes (sentry nodes) are deployed in cloud environments. These sentry nodes each establish a private, direct connection to the validator node (through VPC or static routing), enabling the validator to be hosted in a very restrictive environment. Validator nodes peering solely to sentry nodes can either be hosted in the same remote data center as the sentry nodes (less secure), or utilize private connections via vpn or direct routing to be run in a more secure environment of a private data center. basic sentry node topology. source: forum.cosmos.network Many well-established tendermint validators use this mixed architecture of private servers in regional data centers, supported by cloud nodes spread across several cloud service providers in different regions. If a sentry node fails or is brought down by a DDoS attack, new sentry nodes can easily be booted to replace the compromised node, making it harder to impact the validator. Furthermore, validator operators can utilize this topology in various ways, combining the power of physically accessible hardware (keystorage, hardware firewalls…) with high connectivity and availability of distributed data center nodes. Test: In our effort at CryptoCrew Validators to build the most-secure validator systems possible, we recently tested basic sentry node architecture on an active validator setup on Radix-stokenet (testnet). For this test we used a simple combination of 2 sentry nodes (non-validating full nodes with port 30000 enabled to the public for gossiping connections), as well as one validator node with port 30000 enabled exclusively for the private (internal) IP addresses of the sentry nodes — and with the node-software bound to it’s internal eth1-interface address. system monitoring Tests began with an average stake of 0.56 % voting power and were run for 2 days before increasing the voting power of the validator to 3%. The validator node seemed to behave fine with only one dropped consensus proposal opposed to over 2000 proposals made. Even during the stress-test with high validator-consensus- participation no further proposal got dropped. Radix Sentry Test 1: consensus proposals made vs. 
consensus proposals missed; sync difference Radix Sentry Test 1: bft metrics Problem: Currently Radix’ core code-base doesn’t support correctly relaying connections through full nodes. Validator nodes being run in such a topology will not allow any inbound connections. Instead, the validator node is making outbound connections to other peers in the network and that’s what’s being used for communication. Other nodes can’t connect to that validator (as initiator), but they re-use the outbound connection initialized on the validator’s side. Peer-connection output of another radix-stokenet node during the test. Conclusion: The only seeming way to avoid this is if sentry nodes are actually able to act as proxies in order to relay consensus messages to the protected validator node. In our test, our public full nodes were serving as private bootstrapping nodes, which is not actually fully serving the sentry purpose. To achieve “real” sentry node architecture there has to be a built-in solution in the node’s core-codebase. During our test, the Radix core dev-team has considered options for providing a more complete proxy solution, but did not disclose any concrete plans as for now. In the meantime It is strongly recommended that any node-runner incorporates a standard validator topology using a backup full node pattern to mitigate DDoS attack until a sentry node-like architecture is supported in the Radix node. Summary: It was very interesting to experiment with Radix’ network topology and we’re happy that we were able to acquire some valuable insight. Of course, our results lead to the conclusion that for our “Olympia” mainnet-validator we choose a standard topology, utilizing fully redundant double-oversized (32 GB RAM + 8 cores) nodes, hosted in Tier 3+ data centers.
https://medium.com/@ccvalidators/testing-sentry-node-architecture-on-radix-stokenet-abe0cc958848
['Cryptocrew Validators']
2021-08-06 14:16:16.518000+00:00
['Distributed Ledgers', 'Validator', 'Radix Dlt', 'Blockchain Technology', 'Cryptocurrency']
888
Satellite Orbits: Types and Definitions
Today, humanity uses several different orbits to place satellites. Most attention is focused on the geostationary orbit, which can be used for "stationary" placement of a satellite over a particular point of the Earth. The orbit chosen for the operation of a satellite depends on its purpose. For example, satellites used for direct broadcasting of television programs are placed in geostationary orbit. Many communication satellites are also in geostationary orbit. Other satellite systems, in particular those used for communication between satellite phones, orbit in low Earth orbit. Similarly, satellite systems used for navigation, such as Navstar or the Global Positioning System (GPS), are also in relatively low Earth orbits. There are countless other satellites, meteorological, research, and so on, and each of them, depending on its purpose, receives a "residence permit" in a certain orbit. Earth's gravity and satellite orbits As satellites orbit the Earth, they are constantly pulled toward it by the Earth's gravity. If the satellites were not moving along their orbits, they would gradually fall to Earth and burn up in the upper atmosphere. However, the motion of satellites around the Earth creates a centrifugal effect that balances this pull. Each orbit has its own calculated speed, which balances the force of the Earth's gravity against the centrifugal force, keeping the spacecraft in a constant orbit and not allowing it to gain or lose altitude. The lower the orbit of the satellite, the stronger the Earth's attraction it experiences and the greater the speed required to balance that force. The greater the distance from the Earth's surface to the satellite, the lower the speed required for it to stay in a constant orbit. For a spacecraft orbiting at a distance of about 160 km above the Earth's surface, a speed of about 28,164 km/h is required, which means that such a satellite orbits the Earth in about 90 minutes. At a distance of 36,000 km above the Earth's surface, a satellite requires a speed of just under 11,266 km/h to stay in a constant orbit, which makes it possible for such a satellite to orbit the Earth in about 24 hours. Definitions of circular and elliptical orbits All satellites orbit the Earth using one of two basic types of orbits. Circular satellite orbit: when a spacecraft orbits the Earth in a circular orbit, its distance above the Earth's surface always remains the same. Elliptical satellite orbit: a satellite in an elliptical orbit changes its distance to the Earth's surface at different points during a single orbit. Satellite orbits There are many different definitions associated with different types of satellite orbits: Center of the Earth: when a satellite orbits the Earth, in a circular or elliptical orbit, the satellite's orbit forms a plane that passes through the center of gravity, or center, of the Earth. Direction of motion around the Earth: the ways in which a satellite orbits our planet can be divided into two categories according to the direction of this rotation: Prograde orbit: the orbit of a satellite around the Earth is called prograde if the satellite revolves in the same direction as the Earth rotates; Retrograde orbit: the orbit of a satellite around the Earth is called retrograde if the satellite revolves in the direction opposite to the direction of rotation of the Earth.
The track of the orbit: A satellite’s orbit path is a point on the Earth’s surface where the satellite is directly overhead during its orbit around the Earth. The track forms a circle in the center of which is the Center of the Earth. It should be noted that geostationary satellites are a special case because they are constantly above the same point above the Earth’s surface. This means that their orbital path consists of a single point located on the Earth’s equator. You can also add that the orbit of satellites orbiting strictly above the equator stretches along this very equator. For these orbits, as a rule, the displacement of the orbit path of each satellite in the west direction is characteristic, since the Earth under the satellite turns in the east direction. Orbital nodes Orbital nodes are the points at which the path of the orbit passes from one hemisphere to the other. For nonequatorial orbits, there are two such nodes: Ascending node: This is the node where the path of the orbit passes from the Southern hemisphere to the Northern hemisphere. Descending node: This is the node where the path of the orbit passes from the Northern hemisphere to the Southern hemisphere. Conclusion Humanity has not yet exhausted the possibilities of using near space to build communication systems for various purposes. It is expected that promising low-orbit communication systems will absorb new types of services such as remote sensing of the Earth, monitoring, etc., which will optimally balance satellite capabilities and bring low-orbit systems to a level of profitability that is not inferior to geostationary systems. The same applies to systems with highly elliptical satellites.
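The two speed figures quoted above follow from the circular-orbit relation v = sqrt(GM/r). A small sketch reproduces them approximately; the gravitational parameter and mean Earth radius used here are standard textbook values added for illustration, not numbers taken from the article.

```python
import math

GM_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2 (standard value)
EARTH_RADIUS_KM = 6371   # mean Earth radius, km (standard value)

def circular_orbit_speed_kmh(altitude_km: float) -> float:
    """Speed needed to hold a circular orbit at the given altitude above the surface."""
    r_m = (EARTH_RADIUS_KM + altitude_km) * 1000
    return math.sqrt(GM_EARTH / r_m) * 3.6

def orbital_period_hours(altitude_km: float) -> float:
    """Time for one full circular orbit at the given altitude."""
    r_m = (EARTH_RADIUS_KM + altitude_km) * 1000
    return 2 * math.pi * math.sqrt(r_m**3 / GM_EARTH) / 3600

print(circular_orbit_speed_kmh(160))      # ~28,100 km/h
print(orbital_period_hours(160))          # ~1.5 hours
print(circular_orbit_speed_kmh(36000))    # ~11,000 km/h
print(orbital_period_hours(36000))        # ~24 hours
```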
https://medium.com/zeba-academy/satellite-orbits-types-and-definitions-928dd43bf891
['Zeba Academy']
2021-03-03 10:47:48.468000+00:00
['Satellites', 'Earth', 'Science', 'Solar System', 'Technology']
889
[Full StReAming*) Texas 6 , Season 1 Episode 7 full Episode
Streaming Texas 6 Season 1 :: Episode 7 S1E7 ► ((Episode 7 : Full Series)) Full Episodes ●Exclusively● On TVs, Online Free TV Shows & TV Texas 6 ➤ Let’s go to watch the latest episodes of your favourite Texas 6. ❖ P.L.A.Y ► https://tinyurl.com/y9wk24ab Texas 6 1x7 Texas 6 S1E7 Texas 6 TVs Texas 6 Cast Texas 6 Online Texas 6 Eps.1 Texas 6 Season 1 Texas 6 Episode 7 Texas 6 Premiere Texas 6 New Season Texas 6 Full Episodes Texas 6 Watch Online Texas 6 Season 1 Episode 7 Watch Texas 6 Season 1 Episode 7 Online ⭐A Target Package is short for Target Package of Information. It is a more specialized case of Intel Package of Information or Intel Package. ✌ THE STORY ✌ Its and Jeremy Camp (K.J. Apa) is a and aspiring musician who like only to honor his God through the energy of music. Leaving his Indiana home for the warmer climate of California and a college or university education, Jeremy soon comes Bookmark this site across one Melissa Heing (Britt Robertson), a fellow university student that he takes notices in the audience at an area concert. Bookmark this site Falling for cupid’s arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship as she fears it`ll create an awkward situation between Jeremy and their mutual friend, Jean-Luc (Nathan Parson), a fellow musician and who also has feeling for Melissa. Still, Jeremy is relentless in his quest for her until they eventually end up in a loving dating relationship. However, their youthful courtship Bookmark this sitewith the other person comes to a halt when life-threating news of Melissa having cancer takes center stage. The diagnosis does nothing to deter Jeremey’s love on her behalf and the couple eventually marries shortly thereafter. Howsoever, they soon find themselves walking an excellent line between a life together and suffering by her Bookmark this siteillness; with Jeremy questioning his faith in music, himself, and with God himself. ✌ STREAMING MEDIA ✌ Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb to stream refers to the procedure of delivering or obtaining media this way.[clarification needed] Streaming identifies the delivery approach to the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies especially to telecommunications networks, as almost all of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio tracks CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of this content. And users lacking compatible hardware or software systems may be unable to stream certain content. Streaming is an alternative to file downloading, an activity in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user may use their media player to get started on playing digital video or digital sound content before the complete file has been transmitted. The term “streaming media” can connect with media other than video and audio, such as for example live closed captioning, ticker tape, and real-time text, which are considered “streaming text”. 
This brings me around to discussing us, a film release of the Christian religio us faith-based . As almost customary, Hollywood usually generates two (maybe three) films of this variety movies within their yearly theatrical release lineup, with the releases usually being around spring us and / or fall respectfully. I didn’t hear much when this movie was initially aounced (probably got buried underneath all of the popular movies news on the newsfeed). My first actual glimpse of the movie was when the film’s movie trailer premiered, which looked somewhat interesting if you ask me. Yes, it looked the movie was goa be the typical “faith-based” vibe, but it was going to be directed by the Erwin Brothers, who directed I COULD Only Imagine (a film that I did so like). Plus, the trailer for I Still Believe premiered for quite some us, so I continued seeing it most of us when I visited my local cinema. You can sort of say that it was a bit “engrained in my brain”. Thus, I was a lttle bit keen on seeing it. Fortunately, I was able to see it before the COVID-9 outbreak closed the movie theaters down (saw it during its opening night), but, because of work scheduling, I haven’t had the us to do my review for it…. as yet. And what did I think of it? Well, it was pretty “meh”. While its heart is certainly in the proper place and quite sincere, us is a little too preachy and unbalanced within its narrative execution and character developments. The religious message is plainly there, but takes way too many detours and not focusing on certain aspects that weigh the feature’s presentation. ✌ TELEVISION SHOW AND HISTORY ✌ A tv set show (often simply Television show) is any content prBookmark this siteoduced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set set, excluding breaking news, advertisements, or trailers that are usually placed between shows. Tv shows are most often scheduled well ahead of The War with Grandpa and appearance on electronic guides or other TV listings. A television show may also be called a tv set program (British EnBookmark this siteglish: programme), especially if it lacks a narrative structure. A tv set Movies is The War with Grandpaually released in episodes that follow a narrative, and so are The War with Grandpaually split into seasons (The War with Grandpa and Canada) or Movies (UK) — yearly or semiaual sets of new episodes. A show with a restricted number of episodes could be called a miniMBookmark this siteovies, serial, or limited Movies. A one-The War with Grandpa show may be called a “special”. A television film (“made-for-TV movie” or “televisioBookmark this siten movie”) is a film that is initially broadcast on television set rather than released in theaters or direct-to-video. Television shows may very well be Bookmark this sitehey are broadcast in real The War with Grandpa (live), be recorded on home video or an electronic video recorder for later viewing, or be looked at on demand via a set-top box or streameBookmark this sited on the internet. The first television set shows were experimental, sporadic broadcasts viewable only within an extremely short range from the broadcast tower starting in the. 
Televised events such as the 2020 Summer OlyBookmark this sitempics in Germany, the 2020 coronation of King George VI in the UK, and David Sarnoff’s famoThe War with Grandpa introduction at the 9 New York World’s Fair in the The War with Grandpa spurreBookmark this sited a rise in the medium, but World War II put a halt to development until after the war. The 2020 World Movies inspired many Americans to buy their first tv set and in 2020, the favorite radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name “Mr Television” and demonstrating that the medium was a well balanced, modern form of entertainment which could attract advertisers. The firsBookmBookmark this siteark this sitet national live tv broadcast in the The War with Grandpa took place on September 1, 2020 when President Harry Truman’s speech at the Japanese Peace Treaty Conference in SAN FRATexas 6 CO BAY AREA was transmitted over AT&T’s transcontinental cable and microwave radio relay system to broadcast stations in local markets. ✌ FINAL THOUGHTS ✌ The power of faith, love, and affinity for take center stage in Jeremy Camp’s life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life span and The War with Grandpas of Jeremy Camp’s life story; pin-pointing his early life along with his relationship Melissa Heing because they battle hardships and their enduring love for one another through difficult. While the movie’s intent and thematic message of a person’s faith through troublen is indeed palpable plus the likeable mThe War with Grandpaical performances, the film certainly strules to look for a cinematic footing in its execution, including a sluish pace, fragmented pieces, predicable plot beats, too preachy / cheesy dialogue moments, over utilized religion overtones, and mismanagement of many of its secondary /supporting characters. If you ask me, this movie was somewhere between okay and “meh”. It had been definitely a Christian faith-based movie endeavor Bookmark this web site (from begin to finish) and definitely had its moments, nonetheless it failed to resonate with me; struling to locate a proper balance in its undertaking. Personally, regardless of the story, it could’ve been better. My recommendation for this movie is an “iffy choice” at best as some should (nothing wrong with that), while others will not and dismiss it altogether. Whatever your stance on religion faith-based flicks, stands as more of a cautionary tale of sorts; demonstrating how a poignant and heartfelt story of real-life drama could be problematic when translating it to a cinematic endeavor. For me personally, I believe in Jeremy Camp’s story / message, but not so much the feature. FIND US: ✔️ https://www.ontvsflix.com/tv/112994-1-7/texas-6.html ✔️ Instagram: https://instagram.com ✔️ Twitter: https://twitter.com ✔️ Facebook: https://www.facebook.com
https://medium.com/@tvsf-re-ek/full-streaming-texas-6-series-1-episode-7-full-episode-3ffd8a809af8
['Tvsf Re Ek']
2020-12-24 08:39:55.475000+00:00
['Politics', 'Covid 19', 'Documentary', 'Technology']
890
K-Means Clustering for Beginners using Python from scratch.
In this article, we will take a real-world problem and try to solve it using clustering. So let's get our hands dirty with clustering. Introduction: Cluster analysis is a multivariate statistical technique that groups observations on the basis of features or variables they are described by. Was that too boring ok let's try to understand this with an example. In the example given below there two figure, one on the left side has three clusters and this is done on the basis of geographic proximity, the first cluster shows the countries in North America and the second and third cluster shows the countries in Europe and Australia respectively. In the figure to the right, the clustering is done on the basis of the official language of the country. In simple terms we can understand this as observations in a dataset is divided into different groups and this is very useful. EXAMPLE K-Means Algorithm: The algorithm is very simple given data we first initialize seeds randomly. Then we go on calculating the euclidean distance of every point with every seeds. The one with the minimum distance becomes the part of the given seed. After each and every data is covered we place the seeds into the centroid of the clusters formed. And now that centroid is the representative of that cluster. When to use Cluster Analysis? This is one of the decision we have to take while dealing with problems. Taking decision is not a tedious task as it solely depends upon the type of data we are using. If we are using a labeled data we can use classification technique whereas in case when the data is not labeled we can cluster the data based on certain feature and try to label it on our own. So when we use cluster analysis we don’t have labels(ie..data is not labeled) in the context of machine learning this is called as unsupervised learning. Final Goal: The goal of clustering is to maximize the similarity of observation within the cluster and maximize the dissimilarity between the clusters. We will be achieving this goal very soon. Let’s dive into it. Choosing a problem Well to start of choosing a problem is the first and the foremost step of a data scientist. We should always choose a problem such that after solving it, the solution should benefit the end user. In this article we will take an example of market segmentation. There will be certain features due to which the market is segmented. We will try to analyse the the type of customers in the market based on the features. The data set consist of 30 samples and features are satisfaction and loyalty respectively. You can download the data set through this link: https://drive.google.com/file/d/1nxr5XYg4JrwB_EtdWdyIwCT4gr85xfzb/view?usp=sharing Lets Begin: Well I hope you have downloaded the data set from the link given above. Before beginning make sure you have jupyter notebook installed in you pc with anaconda package manager as we will be importing certain libraries. To install certain package through anaconda command prompt you just have to type the following command: pip install the package name you want to install. If you already have the package installed you are ready to go. Command to check the packages installed : conda list #1. Importing relevant libraries: First we will import certain libraries required for performing K-means Clustering. We will also look into these packages in details. 1. Numpy: It is a third party package that helps us to deal with multidimensional arrays. 2. Pandas: Allows us to organize data in tabular form. 3. 
Matplotlib: Helps in visualizing the numpy computation. 4. Seaborn: This also adds to the visualization of Matplotlib 5. Scikit-learn: This is the most widely used machine learning library. It has various functionality as in this case we are importing KMeans from it. Importing Libraries #2. Reading the data: In this step we will load the data set into the variable data using pandas data frame. There are thirty observations with features satisfaction and Loyalty .The data here is in .csv format. Remember pandas helps us to organize data in tabular form. READING DATA #3. Plotting the data: Now we will simply plot the scatter plot of the given data using. These are simple python code we will get accustomed to it once we start using it regularly. PLOTTING #4. Clustering: For the first section in Selecting Feature just ignore the title for now we will see it later. We are just creating a copy of our data and storing it in variable x. So now we will create a variable kmeans and by passing the argument 2 in KMeans we just said that we want to create 2 clusters. We don’t have to do anything this simple line of code will perform the kmeans algorithm and will create two clusters. It will classify our data into two clusters. You can see below the number of iterations that has been perfomed, here 300. This all runs behind the scenes. Python will take care of everything. CLUSTERING #5. Clustering Result: In this step we will again create a copy of x and store it in clusters. We will create a new column called cluster_predict which will have the value as predicted by our kmeans algorithm. Things will become more clear as we move ahead. CLUSTERING RESULT #6. Plot: Now we will plot the clustered data, note here we have two parameters/features here ‘Satisfaction’ and ‘Loyalty’. We can easily see the two clusters the one with all the red and the other with all the blues. But there is a problem. PLOTTING The Problem The biggest problem here is that Satisfaction is choosen as a feature and loyalty has been neglected. We can see in the figure that all the element to the right of 6 forms one cluster and the other on the left forms another. This is a bias result because our algorithm has discarded the Loyalty feature. It has done the clustering only on the basis of satisfaction. This does not give an appropriate result through which we can analyze things. Satisfaction was choosen as the feature because it had large values. So here is the problem both the data are not scaled. First we have to standardize the data, so that both the data have equal weights in our clustering. We can’t neglect loyalty as it has an important role in the analyses of market segmentation. PROBLEM (FEATURE SELECTION) #7. Standardizing the variables: We will not go in depth of this as sklearn helps us to scale the data. The data is scaled around zero mean. Now we can see that both the data are equally scaled and now both will have equal chance of being selected as feature. STANDARDIZING #8. The Elbow Method: Have we ever wondered why we initialized kmeans with 2 clusters only. Yes, we could have initialized it with any value we wanted we could have got any number of clusters. But the analyses becomes difficult when there are a large number of clusters. So how we will know the exact number of cluster to start off. Note there are no such exact number as it changes with the problem in hand. Here the elbow method comes handy when we are confused as to how may clusters do we need. 
What elbow method does is it starts of with making one cluster to the number of clusters in our sample and with the kmeans inertia value we determine what would be the appropriate number of clusters. Remember our goal- Our final goal was to minimize the within the cluster sum of square and maximize the distance between clusters. With this simple line of code we get all the inertia value or the within the cluster sum of square. ELBOW METHOD #9. Visualizing the Elbow Method: This graph looks like elbow and we have to determine that elbow point. Here the elbow point comes at around 4 and this our optimal number of clusters for the above data which we should choose. If we look at the figure carefully after 4 when we go on increasing the number of cluster there is no big change in the wcss and it remains constant. Hurrah..!! we have got the optimal number of clusters for our problem. We will now quickly perform the kmeans clustering with the new number of clusters which is 4 and then dive into some analysis. ELBOW METHOD VISUALIZATION #10. Stronger Clustering: This is a simple code which perform clustering with 4 clusters. Here there are four clusters so our whole data is categorized into either 0,1,2 or 3. #11. Plotting the newly cluster: #12. Analysis (The final step): Through the given figure following things can be interpreted: 1. The purple dots are the people who are less satisfied and less loyal and therefore can be termed as alienated. 2. The red dots are people with high loyalty and less satisfaction. 3. The yellow dots are the people with high loyalty and high satisfaction and they are the fans. 4. The sky blue dots are the people who are in the midst of things. The ultimate goal of any businessman would be to have as many people up there in the fans category. We are ready with a solution and we can target the audience as per our analysis. For example, the crowd who are supporters can easily be turned into fans by fulfilling their satisfaction level. ANALYSIS Footnotes I hope I was successful in writing my first blog it took me around 4 long hours. Also please forgive me if I have missed something. I am not that good in writing I am just getting used to it. Hopefully, I will be back with another one. Thanks for reading..!!
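Because the notebook code is hard to read in this flattened format, here is a condensed, self-contained sketch of the same workflow (scale, elbow method, final clustering with 4 clusters). The 'Satisfaction' and 'Loyalty' column names and the choice of 4 clusters come from the post; the CSV file name and random_state are assumptions added for the example.

```python
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import preprocessing
from sklearn.cluster import KMeans

# File name is assumed; the dataset has 30 rows with 'Satisfaction' and 'Loyalty'.
data = pd.read_csv("market_segmentation.csv")
x = data[["Satisfaction", "Loyalty"]]

# Standardize so both features carry equal weight in the distance calculation.
x_scaled = preprocessing.scale(x)

# Elbow method: within-cluster sum of squares (inertia) for 1..9 clusters.
wcss = []
for k in range(1, 10):
    km = KMeans(n_clusters=k, random_state=42)
    km.fit(x_scaled)
    wcss.append(km.inertia_)

plt.plot(range(1, 10), wcss, marker="o")
plt.xlabel("Number of clusters")
plt.ylabel("WCSS (inertia)")
plt.show()

# Fit the chosen number of clusters (4, per the elbow) and label each observation.
kmeans = KMeans(n_clusters=4, random_state=42)
data["cluster_pred"] = kmeans.fit_predict(x_scaled)
print(data.head())
```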
https://medium.com/code-to-express/k-means-clustering-for-beginners-using-python-from-scratch-f20e79c8ad00
['Ankit Prasad']
2019-03-22 06:33:59.248000+00:00
['Machine Learning', 'Blogger', 'Neural Networks', 'Writer', 'Technology']
891
This is why smartphone specs don’t matter as much as you think
If you’re thinking of buying a new smartphone, you have that in common with 1,5 billion other consumers this year. For the environment, this is bad news. Not only does smartphone production contribute to massive CO2 emissions, it also leads to the fastest growing waste stream worldwide: E-waste — So it’s worth considering if it’s really necessary to buy a new phone. The first thing that many people are drawn to is the specs of the phone. That is the camera, chipset, memory, operating system; the list goes on. ‘Good’ specs justify the price range from low end to high end and also are selling points — why, after all, would you buy a phone, if it didn’t add anything to what you already have? Here is the catch. There is a reason why marketing campaigns now zoom in on details like the number of cameras on a smartphone (why do you need 4?). Most users are not benefitting from the incremental changes that are made every year anymore and increasingly, big companies struggle to distinguish themselves from each other. Over the last years, there has been a trend towards convergence of technology. Even mid-range devices now provide far more CPU power and memory than most users will need. The same goes for cameras or any other part of the phone. For the job that the majority of people actually buy smartphones for — almost all of them are now good enough for it. Taking pictures, using apps, making phone calls and navigation, all of it is possible, in a very smooth way, even on a low-end phone. But this convergence goes deeper than just making phones that provide very similar quality. They are also almost identical when you look at the technology behind their functions. There are only a handful of players that supply practically the entire market. Let’s look at two examples. Most chipsets that are used in smartphones today are produced by one single company, Qualcomm. That is either directly or through licensing. The chipset of the phone will determine almost anything that a phone can do. You could argue a smartphone is actually a portable chipset with a case and sensors on it. It’s not just that the chipsets on most phones are sufficient in power to run the device smoothly; it is also that the chipsets are identical. Arguably they are sold in different ‘flavors’ with different CPU power, but in the end, you get the same technology with the same limitations and strengths. The same goes for software. Google’s Android will now run on almost any device, with iPhones being the exception. The different versions of Android are almost identical on new phones. At the moment, that is Android 9. You get slightly adapted versions of it, but in the end, the technology is the same. It will look the same, you have the same functions and access to apps and even the differences between different versions of Android is relatively neglectable. Creating needs rather than serving them The trend behind the scenes of this industry seems counter-intuitive: The more smartphones have become similar, the more we are told that they are different — and this is still working for tech giants. Tech-sites are rigorously comparing and ranking data on differences, for which you need specialized knowledge to be able to describe to what extent the effects it produces are exactly ‘different’. There is now fierce competition on cameras, for instance, where you still have some differences between brands. 
It may be my own inability to pay attention to the tiny details that are being advertised, but I cannot describe in any language that is available to me why this matters so much. To me, they look very much the same. Shown here is a camera comparison between three different mid-range devices. Only one of them is hailed to “produce magic”. Can you tell which one is which? We are being directed to pay attention to incremental changes while the use-value of smartphones is very similar. A lot of what is sold as a material benefit is actually just psychological. Specs are not the guiding factor you’d want them to be. Rather than obsessing about the minor details that still distinguish smartphones, it would be more honest to say that most smartphones are now identical. Steve-Jobs-style product releases try to hide the elephant in the room: there is almost no innovation taking place at the technological level. The smartphone in 2019 is pretty much the same as it was 2018 and it will be the same in 2020. That’s a good thing. It means that, If you have a smartphone and it works, you’re probably well off ignoring all the talk about specs and innovation for a while and hold on to it.
https://medium.com/fairphone/this-is-why-smartphone-specs-dont-matter-as-much-as-you-think-4ad2cf3c7507
['Fabian Hühne']
2020-02-17 16:05:54.536000+00:00
['Smartphones', 'Consumerism', 'Marketing', 'Technology', 'Innovation']
892
Designing for a hackathon
Written by: Annie Xu Each year, students from around the world are eager to participate in hackathons and look forward to an exciting weekend filled with learning experiences, and a healthy dose of competition. Hackers can have very different impressions of each event- starting from the very first social media post. No matter what we were told growing up, people do judge books by their covers. Design helps represent the identity of the organization and its values as well as what hackers can expect from the event experience. All of this is done through ✨ branding ✨. Defining the brand identity is just one of the many aspects the Design team is responsible for on Hack the North. Designing visual assets, user experiences, and even wayfinding signage are just some of what falls under our purview. Throughout all of these projects, there are 3 pillars our design team carries into the design process: Designing with intention 🔬 Designing collaboratively and efficiently 🤝 Designing for inclusion and accessibility 🌎 Designing with intention 🔬 How can we make our website more accessible? What narrative should we build with our brand? What information goes first on the sponsorship package? When designing for a big event, we often have to make difficult decisions and hope that the decisions we make are objective. In reality, we sometimes get caught up designing in a bubble where we let our biases influence our decisions. As a design team, we make a conscious effort to be intentional with our design decisions and be well informed about our community. When it comes to each design project, there are two things to always keep in mind: 1) What’s the desired outcome?, and 2) Who’s our audience?. When branding Hack the North 2020++, our goal was to simultaneously encompass the event’s direction while staying identifiable as the event our hackers all know and love. Specifically for 2020, we wanted to emphasize our main mission: To make it easy for anyone to dream big and build. Our audience includes passionate and innovative students from around the world. Each year we reach out to our hackers to ask for feedback on the event and their experiences, which helps us better understand the community we’re designing for. What do our hackers value? What truly makes their experience special? When designing the brand, we must consider what makes our hackathon unique to our audience. In order to push for inclusivity last year, we designed big and bold empowerment posters for the event. Since our hackers value these unique experiences, we wanted to showcase them on our landing page for 2020.
https://hackthenorth.medium.com/designing-for-a-hackathon-f3025c8aa4df
['Hack The North']
2020-08-07 16:10:14.296000+00:00
['Hackathons', 'Accessibility', 'Branding', 'Technology', 'Design']
893
Casual Conversation: The Evolution of the Dating App
Image Credit: Alexander Sinn via Unsplash There are days in our adult lives that just have more meaning than others. Days that we look back on and think fondly of. These days could be ones of personal advancement or days where a good time was had. For me, a day that will always resonate with me is June 29, 2019. This was the day when one of my best friends got married and I had the privilege to be one of his groomsmen. This was one of the greatest experiences in my adult life, one filled with so much happiness and joy. Yet it was not a feeling that I ever thought I would experience in my life. Throughout my life, I have been to many weddings. Weddings here in the United States and some overseas when I lived in Jordan. As I am not much of a dancer, as my friends have constantly pointed out to me, they never really seemed like my cup of tea. Every time I would receive a wedding invitation I would dread having to respond to it, but when I was asked to be a groomsman I thought that this would be a different experience. And it was a great one at that. Now that a year has passed since that date, I think about my friend’s relationship and marriage. He met his wife like a lot of people meet their spouses these days: through a dating app. Their success story got me to thinking about how the landscape of dating apps has changed over the last decade from a shameful way to meet people to a new way to connect with people. The transformation of a service like Tinder is quite fascinating. The How We Met Lie Image Credit: Helena Lopes via Unsplash Back in the late 2000s, there was an indication that the world was going more and more digital. Social media websites and apps were all the rage and quickly becoming people’s preferred method of communicating and connecting with people. The rise of services like Facebook, Twitter, Snapchat, and Instagram was so meteoric that it permanently shifted the way we handle online media in the years to follow. This rise of a digitally connected life would soon seep into the way that we date, though the acceptance of it would be slow and gradual. Services such as eHarmony, founded early on in the 2000s, saw this coming before others. But at the time, smartphones were not ubiquitous, and the subscription structure of the service painted it as a service for older single people looking for “the one” as opposed to a younger generation that was focused on more casual dating. Services like Zoosk, OkCupid, and Grindr were eventually released, and there was finally an answer for those looking for a casual connection that could turn into more. Best of all, these services were free, meaning that the cost of entry was merely the time needed to fill out a profile. These services have been very successful in gaining users and eventually offering a paid tier for those looking for advanced features. Yet there was one critical issue with all of these free dating services: online shame. Not many were happy to admit that they met online (here is a great article that details the psychology behind this shame). A sitcom in that era, How I Met Your Mother, had an episode that addressed this very idea. The protagonist Ted meets a woman while playing World of Warcraft and is urged by this woman not to mention the true nature of their meeting but instead to fabricate a romantic story about meeting in a cooking class. This slice of pop culture was very indicative of the way that many people felt in the late 2000s and early 2010s about meeting someone online.
At the time, the prevailing notion was that online dating was for those people who could not have any success meeting people in more conventional ways, like at a bar or a park as the movies described to us. A person was borderline pathetic if they met someone this way, and the only solution that made any sense was to lie about where the relationship started. This all was set to change in 2012 as an app called Tinder hit the app stores of smartphones. Swipe Culture Image Credit: Yogas Design via Unsplash 2012 was a big year for online dating. Two of the giants of the industry today were founded in this year: Hinge and Tinder. It is Tinder, however, that is most often credited with the shift in perception about dating apps. The foundation of this shift is an addictive mobile-only idea: swiping. The concept of Tinder is very simple: swipe left if uninterested and swipe right if interested. This created a bit of a mobile game in a sense, to the point that some people went as far as to call Tinder the “swiping game”. In the years that would follow, almost all dating apps adopted some sort of swipe mechanism to match two people. This game-like swiping set the stage for when a match happened (when two people swiped right on one another). Since the matching process was so simple and largely superficial (Tinder did not require a detailed profile to be filled out like eHarmony, Match, or OkCupid did), the interactions were very casual. So much so that Tinder became known as a “hooking up” app more than a dating app. People were matching, messaging, and then meeting for a casual night that might lead to sex. This happened at such a high rate that it became normalized. Tinder wasn’t like all of those other dating apps that tried to promise a lifetime of happiness and finding your soulmate. Tinder helped you find people, and what became of the encounter was up to the user. It is this difference in approach that led to a different perception of Tinder. Whereas a traditional dating app was panned by people for trying to recreate the soulmate search, Tinder was embraced for being a tool for fun and living life, an idea that appealed to the younger demographic that Tinder targeted. This helped to normalize the idea of Tinder, so much so that people felt better about admitting that they met someone on the app. The app had cemented itself as part of the popular social consciousness of people in their 20s and early 30s. This success, as in all industries, has led to copycats. Bumble, for example, is in many ways a copy of Tinder with the twist that women must message first on its app. Apps that were around before Tinder have adopted a similar swiping user interface. After all, this is what the people have indicated that they wanted, and these companies are trying to cater to the people above all else. Staying Power Image Credit: Scarlet Ellis via Unsplash Tinder was successful in shifting the market and perception of how we date in the age of the smartphone. This company became a household name, with countless people, like the friend mentioned earlier, meeting their spouses through the app. The company had success in reshaping the way that we dated: meeting someone on the internet was not something to be ashamed of, but rather something incredibly normal. After all, in recent years, so much of our daily lives has shifted to an online model. We shop, bank, and are entertained through the lens of a screen. So why not the way that we date and find love?
Tinder, Hinge, and Bumble have ushered in this new reality and have gained millions of users in the process. The next step for a company like Tinder? Monetization. Much like eHarmony before it, services like Tinder and Hinge have relied on a premium subscription to maintain profitability. So what does a premium Tinder account give a user? Unlimited swipes to start, and other features such as Passport (which allows matching across the globe) and Super Likes for added visibility on the app. Bumble and Hinge have offered similar subscriptions as a means of gaining revenue from their most loyal subscribers. These companies have also monetized their apps through ads throughout the swiping experience. What has resulted is a situation where dating apps have become a place of nuance, a place where casual dating encounters can live symbiotically with those who are looking for a dedicated and committed relationship. The idea of a dating app has become so normalized to us as a society that dating in other ways can sometimes feel foreign. Quite often when asking a friend where they met their new significant other, hearing anything other than “on Tinder” or “through a dating app” can feel strange. The way that we date has been digitized and revolutionized, for better or for worse. Which raises the question: is this the peak for a service like Tinder? Or is there another step to take? Social Media Evolved Image Credit: George Pagan III via Unsplash Some might balk at the idea of a dating app taking itself more seriously. After all, the sheer number of internet posts and stories about unsolicited genitalia pics is something of a legend. The Tinder subreddit is littered with these stories more so than any success stories. This has been the double-edged sword of swipe dating apps. While the at-times casual nature is endearing, it also opens the door to platform abuse. Yet it would seem that Tinder, Bumble, and Hinge consider it all a net positive and are looking for the next evolution. This next evolution is the idea of a virtual hangout. The idea on the surface is simple and very smart upon deeper analysis. At its most basic level, a dating app is a way for people to find one another. The job of the app is to use its algorithm to help the user find these people. What these people do after this matching point is entirely up to the user. Taking this beyond dating is the next chapter of a dating app. And while this may seem insane on the surface, consider that Bumble has already introduced its swiping feature to find friends and like-minded professionals in addition to finding love. Tinder has introduced Passport for all users to find people to talk to around the world during the COVID-19 pandemic. This feature is less about dating and more about people connecting. That is the endgame of these applications. Taking the idea a step further, think about the current state of popular social media platforms. Apps like Facebook, Twitter, TikTok, Snapchat, and Instagram are filled with toxicity and negative arguments. It seems that every day people are unfollowing or blocking one another at an alarming rate. Dating apps have an opportunity to be the antithesis of this negativity. Like a breath of fresh air, an alternative to what has been accepted as the norm. Will they be successful? Time will tell, and they have their reputation issues to overcome. It would seem then that dating apps are doing a bit of growing up and maturing at this point. Moving beyond romantic relationships to human interaction.
A shift that makes a lot of sense in an industry that traffics in human emotion much more than others do. It has become normal to see people getting married that met online. Perhaps sooner rather than later we will say the same about friendships and business partnerships that were started on Tinder or Bumble. The dream of these companies is for the dating app to become the connection app. Time will tell if this dream is realized or laughed off into the atmosphere.
https://medium.com/curious/casual-conversation-the-evolution-of-the-dating-app-6f9f2b2607bd
['Omar Zahran']
2020-08-13 23:16:23+00:00
['Networking', 'Dating', 'Social Media', 'Lifestyle', 'Technology']
894
What’s the catch? Traceability for responsible seafood supply chains
The new Netflix release — Seaspiracy — has taken the internet by storm as it tackles the many problems plaguing our oceans. While some have criticized the documentary for generalizing the issues and using outdated studies, Seaspiracy highlights the very real harm unethical fishing practices cause for people and our planet. A report from the Environmental Justice Foundation found that there were “cases of slavery, debt bondage, insufficient food and water, filthy living conditions, physical and sexual assault and even murder aboard fishing vessels from 13 countries operating across three oceans.” These human rights abuses can often be hard to track, as the vessels are far out in the ocean and rarely come to shore. An article from the Future of Fish discusses the need for greater supply chain transparency in the seafood industry, noting that while many large industry players are aware of the human rights issues, they did not believe it could be happening within their own supply chains. Companies like Wal-Mart, Costco, and Whole Foods have been found selling seafood caught using forced labour, and more companies are realizing the need to take immediate action to protect their supply chains from human rights violations. As governments, corporations, and not-for-profits fight to fix these issues, there is no doubt that technology will play an important role. Peer Ledger’s MIMOSI Connect traceability platform supports the shift towards greater transparency in the seafood industry. Our blockchain-enabled technology gives companies a trusted, immutable record of transactions and metrics across their entire supply chain to support responsible supply chain management and due diligence. MIMOSI Connect allows companies to capture and track transactions and important metrics to instantly map and monitor their supply chains. For companies seeking to get serious about traceability within their supply chain, MIMOSI Connect provides the proof to ensure that their practices and values stay aligned.
https://medium.com/@peerledger/whats-the-catch-traceability-for-responsible-seafood-supply-chains-56774d9b4e9a
['Peer Ledger']
2021-04-13 16:03:32.905000+00:00
['Blockchain Technology', 'Sustainability', 'Oceans', 'Seafood']
895
Polkadot Gifting Data
The festive season is well and truly upon us and blockchain isn’t missing out on the fun gifting madness this year! Following the Polkadot announcement that you can gift DOT or KSM to friends or family, many have taken advantage of this and given the gift of a digital asset this year. What’s great about this feature is that you can send the gifts to anyone, even if they don’t already have an account or wallet. This allows you to overcome one of the biggest challenges for any blockchain network, which is the onboarding process. Thus, you may not only be gifting a digital asset but also an invitation into the wonderful decentralised world of cryptocurrency. As Christmas is merely days away, we’re having a look into the gifting data to see just how generous people have been this time of year. All in all, there was a total of 1,364.3 DOT (US$37,791.10) and 18.4 KSM (US$5,100.30) gifted. The luckiest single gift recipient received 5.1 KSM, which is over US$1,400 and contributes over a quarter of the total amount of KSM gifted. The average gift amount was nothing to scoff at, with 3.4 DOT (US$94.20) and 0.3 KSM (US$83.20). As of now, there are still gifts worth over 178 DOT and 0.77 KSM out there unclaimed; maybe Santa got a little lost? Across Polkadot and Kusama there were a total of 457 gifts sent, with one popular claimer having received 13 KSM gifts. It’s clear from the graph that the gifting ramped up following the initial announcement from Polkadot on the 13th of October, with the peak number of gifts being sent on the 20th of October, when 85 gifts were sent. In fact, more than a third of the total number of gifts that have been sent thus far were sent during the week ending the 24th of October, when 172 gifts were sent. What’s interesting to see is that the number of gifts not yet claimed is sitting at 84. Perhaps these will be ‘opened’ come Christmas day and we will see this number drop rapidly. Of the 373 gifts which have been claimed, the average amount of time taken to claim was just over 1 day (26.1 hours). Most gift recipients have claimed much faster, but one DOT recipient is still waiting to claim their gift of 12 DOT back from the 15th of October (over 67 days ago!). So if you’re still in the market for a last-minute gift for your loved one, why not send them the gift of Polkadot? Follow the steps listed on the Polkadot Gifting article and get your Christmas presents sorted so you can focus on what’s most important — the food! About SubQuery Network SubQuery is Polkadot’s leading data provider, supporting an indexing & querying layer between Layer-1 blockchains (Polkadot) and decentralized applications. SubQuery’s data service is being used by most of the Polkadot and Kusama crowdloan and parachain auction websites live today. SubQuery’s protocol abstracts away blockchain data idiosyncrasies with the SubQuery SDK, allowing developers to focus on deploying their core product without needlessly wasting efforts on custom backend technologies. Linktree | Website | Discord | Telegram | Twitter | Matrix | LinkedIn | YouTube
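For anyone curious how figures like these totals, averages, and claim times can be derived once the gift events have been indexed, here is a rough JavaScript sketch. The record shape and field names are simplifying assumptions for illustration only, not SubQuery's actual schema.

```javascript
// Hypothetical gift records; field names are assumed for illustration only.
const gifts = [
  { token: 'DOT', amount: 12.0, sentAt: new Date('2021-10-15'), claimedAt: null },
  { token: 'KSM', amount: 5.1, sentAt: new Date('2021-10-20'), claimedAt: new Date('2021-10-21') },
  { token: 'DOT', amount: 3.4, sentAt: new Date('2021-10-22'), claimedAt: new Date('2021-10-23') },
];

// Totals and averages per token.
const dotGifts = gifts.filter((g) => g.token === 'DOT');
const totalDot = dotGifts.reduce((sum, g) => sum + g.amount, 0);
const averageDot = totalDot / dotGifts.length;

// How many gifts are still waiting to be claimed.
const unclaimedCount = gifts.filter((g) => g.claimedAt === null).length;

// Average time to claim, in hours, over the gifts that have been claimed.
const claimed = gifts.filter((g) => g.claimedAt !== null);
const avgClaimHours =
  claimed.reduce((sum, g) => sum + (g.claimedAt - g.sentAt) / 36e5, 0) / claimed.length;

console.log({ totalDot, averageDot, unclaimedCount, avgClaimHours });
```

With a full set of indexed gift events in place of the three sample records, the same aggregation yields the kind of headline numbers quoted above.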
https://medium.com/@subquery/polkadot-gifting-data-f4af051c4b8a
['Subquery Network']
2021-12-23 07:47:45.684000+00:00
['Technology', 'Polkadot', 'Blockchain', 'Gifts', 'Crypto']
896
3 Key Elements that will Make or Break Your Digital Ad Campaign
So, you’ve crafted your content and documented your strategy… Now, how do you ensure a successful campaign set-up? We’ve run thousands of content campaigns, and while we recognize that each one has distinct objectives, we’ve discovered that no matter the industry, there are three elements that significantly impact the success of your digital ad campaign. Make the most of your media dollars by following these best practices. 1. HEADLINES Despite the age-old wisdom of David Ogilvy, headlines continue to be hastily thrown together without the time & thought they deserve. Your headline is your first impression, and there are no second chances. 2. IMAGES Our brains process visual information about 60,000x faster than text. Images, therefore, not only enhance your content; they also catch the interest of readers who scroll through mountains of content every minute. 3. DEVICE SPLITTING As a savvy digital advertiser, you’ll want to make sure you’re considering diverse user experiences and traffic behaviour across desktop, mobile, and in-app, and ensure you’re optimizing performance by creating separate digital advertising campaigns for each. Follow these simple best practices to maximize the success of your digital ad campaigns. To get started with StackAdapt, Request an Invite here!
https://medium.com/stackadapt/3-key-elements-that-will-make-or-break-your-digital-ad-campaign-eb7676c27289
['Maggie Clapperton']
2018-01-22 16:45:37.496000+00:00
['Content Marketing', 'Advertising Technology', 'Programmatic Advertising', 'Digital Marketing', 'Digital Advertising']
897
BUILDING A TECH BUSINESS AND INVESTING IN A BLOCKCHAIN FUTURE WITH MICHAEL HYATT
Michael Hyatt on the Speaking of Crypto podcast “The companies I will suggest in the next 2 to 3 years that are going to make it are ones that have a real blockchain product with real utility that people really need and they’re getting real quality advantage out of, like any other company. And 2017 was the year of FOMO which is ‘I’m going to miss out so I better buy it’ 2018 is like ‘woah what happened here, hold on here I think we have to go find something real’ and 2019 I think is going to be the year of prove it. ‘Prove you’ve got something. Oh, you’re going to raise a coin? I’ve heard that story. OK what is it really that you’re doing?”. Michael Hyatt, Co-Founder of BlueCat and Blockchain Advisor to Polymath Michael Hyatt has built incredibly successful tech businesses and he’s been in business since before the days of the internet. He’s got first-hand knowledge of where the internet and computer technology came from, and educated, experienced insight as to where blockchain tech is going. https://twitter.com/mhyattspeaker Michael Hyatt, keynote speaker So where is it going? Michael talked about a creative destruction phase or a nuclear winter, much like we’re seeing now. But he believes that the future is bright. He thinks “the Facebook of cryptocurrency hasn’t been born” but that we’ll see real businesses with important use cases coming to the forefront after the fallout from the hype around cryptos and all the betting on the promises of ICOs that have never delivered, and won’t. He talks about the crypto space mirroring the real world. People talk about there being a new paradigm, but Michael doesn’t buy it, not where businesses and investments are concerned. He says it all comes down to the fundamentals. Is there a viable business? Are these the people who can deliver what they’re promising? And is there something backing the crypto or blockchain investment that has real value? “If you’re starting a company, whether it’s a crypto company or any kind of company, I don’t think it matters. I think what matters is that you have to be in it for 10 or 15 years and build to win not build to sell or build to flip.” He says when you’re buying part of an ICO, you’re buying a Kickstarter, that really, you’re buying a promise that it’s going to be useful, but that very few companies have delivered on their promises. With Ethereum, he says that something big has to happen. When Ethereum can be used as part of some revolutionary tool, an app that changes the way things are done in the financial world or the medical world, if it can change the way we’re doing things now, then the value will really go up. But, whether it’s Ethereum or some other blockchain technology that transforms the way we’re doing things now, there needs to be broad-based adoption and a global understanding of the new technology’s value. Then and only then will there be the Crypto or Blockchain Facebook or Google or Apple. Michael mentions the collapse of the condo market in Miami as a historic example that may be similar to what we’re seeing in crypto right now. Condos that were overpriced plummeted in cost, but there was a point when buyers and investors saw that condos still held a certain value. So, while he doesn’t believe Bitcoin is digital gold, he does believe there is a value and that the public will figure out what it is.
I ask Michael about his affiliation with Rotman’s Creative Destruction Lab, which he is completely impressed by, saying that he’s in a place where he’s surrounded by big thinkers and intelligent innovators. https://www.creativedestructionlab.com/ Michael talks about being an advisor to Polymath. He’s excited about security tokens and believes in Trevor Koverko. https://twitter.com/trevorkoverko He also believes in the idea of tokenizing securities like art, paintings, wine, or buildings that have an inherent value, and supports the idea that these securities, which aren’t big enough to go public, can also draw in non-accredited investors who would like to put their money into something but may not have the wealth to pour in large amounts of money in order to see a return on their investment. One of the other topics we hit on is regulation. His point of view is that regulation makes the market real. It’s there to stop companies from lying to and cheating people, and regulation entering the picture is a healthy thing. Essentially the regulators will sort and sift through what’s being offered and cut out the crap. Michael is also a regular contributor to The Pitch Podcast. https://player.fm/series/series-1451959
https://medium.com/speaking-of-crypto/029-building-a-tech-business-and-investing-in-a-blockchain-future-with-michael-hyatt-cbe56f47fcc1
['Shannon Grinnell']
2018-11-11 21:08:33.734000+00:00
['Blockchain', 'Cryptocurrency', 'Token Economy', 'Blockchain Technology', 'Investing']
898
5 Ways to Write Cleaner Code Quicker
Photo by Sarah Dorweiler on Unsplash Cutting corners to meet development deadlines is common. But it will always come back to bite you. At some point, someone will come across what you have done and have to negotiate their way around it. They might spend two hours trying to decipher the logic, or, more commonly, they might just build on top and leave someone else to deal with it. This is how projects die. Not through intentional sabotage or consistent abuse, but the evolution of a few shortcuts. Don’t expect deadlines to change, no matter how unreasonable they are. Stakeholders don’t care about you. Find ways to write clean code quickly. The following rules will help with that. Use TDD Photo by Green Chameleon on Unsplash TDD stands for test-driven development. It’s a method whereby you write tests before logic. Consider the following pseudocode for a function.

if inputNumber = 0
  return inputNumber
else if inputNumber = 1
  return inputNumber * 2
else
  return inputNumber * 10

To implement this using TDD, you would first write a simple test with a simple assertion. For example, we can see this function should return 0 if we pass 0, so we would write a test with exactly that assertion: call the function with 0 and expect 0 back (note: I’m using JavaScript for this, but it works in any language). Now, you would write logic to pass the test. The simplest way to do this is to make the function return 0. Since the test is passing, we can take the next assertion, i.e. the function should return 2 if the input is 1, and write a test for that. This breaks because we’ve built the logic to only pass the first test. So now we write the minimum code required to make both tests pass together. Repeat the previous process for the final assertion, where the function should return any other input multiplied by 10. It’s a good idea to write two or more assertions for something like this to increase comprehensiveness. Make it pass again without breaking the other tests. And done. You’ll notice that we’ve changed how the function handles the 0 value. This is an important part of TDD. Every time you write code to pass another test, you refactor it. TDD can be applied almost always, and is actually better for situations more complex than this. TDD ensures all your logic is covered, because you only write what is needed to pass the test; it practically writes the logic for you, because you break it down into simple, manageable steps rather than trying to think of it all at once; and it acts as clear documentation for your code. It is one of the best ways to write clean code quickly. Avoid nested logic Photo by Edvard Alexander Rølvaag on Unsplash Here’s some pseudocode for a function that calculates redundancy pay.

if employee's age < 20
  redundancyPay = 0
else if employeeRole = management
  redundancyPay = employeeTerm * (annualSalary * 0.05)
else if employeeTerm < 5
  redundancyPay = employeeTerm * (annualSalary * 0.02)
else if employeeTerm > 10
  redundancyPay = employeeTerm * (annualSalary * 0.04)
else
  redundancyPay = employeeTerm * (annualSalary * 0.03)

Translated directly into code, branch for branch, it reads much the same way. You’ve probably seen code like this, and to be fair, it reads close to English. It’s also a relatively simple example. But imagine more realistic code, where you might have to mix in API calls or more complex logic. You can imagine this getting out of hand quickly. The best solution is to avoid code like this altogether, i.e. commit to one level of nesting in a function. Any more than that, and you break it out into a new function. Consider the following refactored code.
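One way that refactor might look is sketched below in plain JavaScript. The helper names and the single employee object are illustrative assumptions rather than the author's exact code: each branch becomes its own small function, and no function nests more than one level deep.

```javascript
// The four separate inputs are grouped into one object: { age, role, term, annualSalary }.
function isUnderMinimumAge(employee) {
  return employee.age < 20;
}

function managementRedundancyPay(employee) {
  return employee.term * (employee.annualSalary * 0.05);
}

function standardRedundancyPay(employee) {
  // The rate depends only on length of service.
  if (employee.term < 5) return employee.term * (employee.annualSalary * 0.02);
  if (employee.term > 10) return employee.term * (employee.annualSalary * 0.04);
  return employee.term * (employee.annualSalary * 0.03);
}

function calculateRedundancyPay(employee) {
  if (isUnderMinimumAge(employee)) return 0;
  if (employee.role === 'management') return managementRedundancyPay(employee);
  return standardRedundancyPay(employee);
}
```

Calling calculateRedundancyPay({ age: 35, role: 'management', term: 6, annualSalary: 50000 }) returns 6 * (50000 * 0.05) = 15000, the same result the nested version would produce.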
This code is cleaner and much easier to understand. It also wouldn’t take any longer to conceive of this code than it would the first example if you knew from the start that you didn’t want more than one layer of nesting. You might encapsulate all of these functions inside a file which exposes a single entry point. You can also test all of these functions individually, which will make your tests simpler, because they’ll only be dealing with a single part of the logic rather than the logic as a whole. You might notice that I compressed the four parameters from the first example into a single employee object. This leads to the next rule. Prefer functions with one parameter and one responsibility Photo by Fabrizio Verrecchia on Unsplash A function with multiple parameters is intimidating. There’s also usually a positive correlation between arguments and complexity. Sometimes you can’t avoid having more than one parameter. But in most cases, you can probably do one or both of the following: create a data type to encapsulate related parameters, or rethink how many responsibilities your function has. Strongly-typed languages are great for the first point. In general, well-defined custom data types are a surefire way to make your program easier to understand and easier to maintain. But even with languages like Python and JavaScript, you should still prefer objects over separate properties when the data is related, because this at least puts a label on the data as a whole and helps readers understand how everything comes together. But encapsulating parameters into one data type isn’t the right way forward if your function is also doing more than it should. Consider the following example, where we are building an employee record from some form input. The calculateEmployeeFinancials function is doing multiple things. You might argue that this technically is a single responsibility of ‘calculating financials’, but it can still be broken down further. Generally, breaking your functions down as far as they can go is optimal, and takes no more effort than the alternative if you have it in mind from the beginning. It makes your code easier to understand and easier to test. Here’s a refactored solution. This technique shines when you have more complex things happening, but even here it makes sense. These functions now only have one responsibility, and it’s easy to quickly identify exactly what is going on. The buildEmployee function could perhaps be broken down too, but there will be times when you need to do a bit more complex logic. The idea is to break it down as much as you can and then put the pieces together like a puzzle to build your more complicated processes. Be wary of fancy tools Photo by Obi Onyeador on Unsplash This one may be controversial. Most languages have succinct ways of writing things, but often the succinctness can come at the cost of clarity. Consider a plain if statement that spans six lines: it looks a little clunky, but it is clear. Now consider the same check written as a JavaScript ternary expression: it only takes up one line and it looks fancy, but it’s really not clear. There is a cognitive load that comes with an expression like this that you just don’t get with the regular if statement. Ternaries aren’t bad. They can be good in very simple situations, but any more complex than that and you’re playing with fire. Favour procedural programming Photo by Maxime Guy on Unsplash Custom data types are great, but when you attach behaviour to them, they become objects.
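To make that jump concrete, here is a small sketch in the same JavaScript style; the names are illustrative assumptions, not taken from the article. The same employee data, once it grows methods and dependencies, stops being a passive record and becomes an object that other code can call into.

```javascript
// Plain data: a record that standalone functions can read.
const employeeRecord = { name: 'Ada', term: 6, annualSalary: 50000 };

// The same data with behaviour attached: now it's an object, an instance of a class
// that owns logic and holds references to other objects it can call.
class Employee {
  constructor(name, term, annualSalary, payrollService) {
    this.name = name;
    this.term = term;
    this.annualSalary = annualSalary;
    this.payrollService = payrollService; // a dependency on yet another object
  }

  requestRedundancyPay() {
    // Behaviour living on the data: this call hands control to another object.
    return this.payrollService.process(this);
  }
}
```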
This is when things get complicated. Objects are instances of classes. Classes are collections of properties and methods. Classes can own objects of other classes, which lets them invoke behaviour on other objects in the program. This already sounds complicated, but it gets so much worse. When you have a bunch of objects flying around in your program, the data flow begins to look something like a spider’s web. An object might call out to another object, which then calls to another object, which calls back to the original object, which calls to a different object, which modifies some data on another object, and as a result creates an object which then goes off and makes an HTTP call. And so on. When you have situations like this, it becomes very difficult to follow the flow of a program. Debugging involves trying to visualise winding virtual paths like in a labyrinth. And then there’s inheritance. Eugh. Procedural programming has none of these problems. Procedural code simply runs step by step in a clear sequence. This doesn’t mean you can’t call other functions, but there is no concept of some other ‘entity’ like an object taking control of the execution or whatever else. It’s just step by step, all the way through. It’s much easier to follow, much easier to read, and much easier to debug. Here’s a great video about this. Don’t watch if you’re an object-oriented enthusiast. Or do, actually.
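For contrast, here is a minimal procedural sketch of the same kind of flow, under the same illustrative assumptions as the earlier snippets: each step is an ordinary top-level function, the data stays plain, and the whole thing reads top to bottom.

```javascript
// Each step is a plain function; none of them owns the data or hides control flow.
function loadEmployee(id) {
  // Stand-in for a database or API call.
  return { id, name: 'Ada', term: 6, annualSalary: 50000 };
}

function calculatePay(employee) {
  // Simplified stand-in for the redundancy-pay helpers sketched earlier.
  return employee.term * (employee.annualSalary * 0.05);
}

function recordPayment(employee, amount) {
  console.log(`Paying ${employee.name} ${amount}`);
}

// The whole program is a visible sequence: load, calculate, record.
const employee = loadEmployee(42);
const payout = calculatePay(employee);
recordPayment(employee, payout);
```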
https://medium.com/javascript-in-plain-english/5-ways-to-write-cleaner-code-quicker-2c7d6b3617b9
['Lee Mcgowan']
2020-11-18 09:58:30.678000+00:00
['Coding', 'Programming', 'Software Development', 'Clean Code', 'Technology']
899
TikTok: China’s loss is Texas’ gain
TikTok’s new headquarters could be in Texas (Source: Reuters) JEC AUSTIN — Texas is being dubbed Silicon Hills, the next Silicon Valley of the country, and there is a good reason for it. Apple’s announcement in 2019, about the construction of a new campus that would be building all-new Mac Pros, was followed by the high-profile announcement of Tesla’s Cybertruck factory in Austin in 2020. This spate of good tech news for Texas has been followed by another good turn in fortune. TikTok’s Global Reach Most users of online services today have heard of the Chinese video-sharing social networking app called TikTok. Known in China as Douyin, it is owned by a company called ByteDance and can be used to create short videos of 3 to 15 seconds that can include music, lip-syncing, and dancing. While most popular with the Gen Z crowd, the app has expanded its reach to other age groups and has gained popularity in the U.S. TikTok Under Scrutiny Its rising fame and Chinese ownership made President Trump and U.S. regulators take notice of it. It was considered a national security threat by the Trump administration as they believed that TikTok could share data about its American users with the Chinese government. TikTok continued its negotiations with the U.S. government, and Josh Gartner, a spokesman for TikTok’s U.S. operations, said, “Even though we strongly disagree with the administration’s concerns, for nearly a year we have sought to engage in good faith to provide a constructive solution” (Isaac et al. 3). TikTok was displeased about government interference in private business and even though it cooperated with the Trump administration for a while, it became increasingly critical. Trump issued an executive order on August 6, 2020 that took effect on September 20, 2020, banning any transactions within the app. Days later, another executive order was issued that required ByteDance, TikTok’s parent company, to divest its U.S. assets and surrender data gathered from users in the U.S. within 90 days. US Contenders for TikTok sale This left TikTok with no choice but to seek a sale of its U.S. operations to an American company. With over 100 million regular users in the U.S., there were quite a few contenders, with Microsoft and Oracle emerging as the frontrunners. With some confusion surrounding it, a deal was reached in September where Oracle and Walmart would have a combined 20% stake in a new company called TikTok Global with headquarters in the United States. Even though it is called TikTok Global, the main users are from the US and data will only be collected from US members by TikTok Global. Of the five board members for the company, four would be American. Oracle executive vice president Ken Glueck said, “Americans will be the majority and ByteDance will have no ownership in TikTok Global” (Horowitz 12). As soon as President Trump gave his blessing to the deal, Texas Governor Greg Abbott tweeted that he had talked to the president about housing the new TikTok headquarters in Texas. In fact, President Trump said shortly after that “all the technology will be housed here. They are probably going to move to the great state of Texas” (Rozenzweig-Zwiff 2). Oracle has deep roots in Texas and a sprawling campus on the south shore of Lady Bird Lake (Austin, TX). If the deal goes as planned, the addition of TikTok headquarters to Texas would be a huge win for the Lone Star State.
With a booming economy and other tech ventures that have set up shop in Texas, the demographic and economic landscape of the state is set to change. Sources:
https://www.apple.com/newsroom/2019/11/apple-expands-in-austin/
https://www.cnn.com/2020/09/21/tech/tiktok-oracle-walmart-explained/index.html
https://www.theverge.com/2020/7/22/21334860/tesla-cybertruck-factory-austin-texas-location-model-y
https://www.texastribune.org/2020/09/22/tiktok-texas-trump/
https://www.statesman.com/business/20200919/trump-says-he-approves-tiktok-deal-with-possible-texas-headquarters
https://economictimes.indiatimes.com/news/international/business/tiktok-to-challenge-trumps-crackdown-in-court-amid-rising-dispute/articleshow/77699743.cms
https://medium.com/international-junior-economist/tiktok-chinas-loss-is-texas-gain-e3c6d24ffe8f
['Ari Sharma']
2020-11-23 17:23:05.874000+00:00
['Tik Tok', 'Technology', 'China', 'Texas']