Unnamed: 0
int64
0
3k
title
stringlengths
4
200
text
stringlengths
21
100k
url
stringlengths
45
535
authors
stringlengths
2
56
timestamp
stringlengths
19
32
tags
stringlengths
14
131
1,000
NOZZLE ANDBUILD PLATE
NOZZLE: Now we’re getting to mention the nozzle component, both the inside and the outside. Nozzles are mostly made from brass because it’s easier to machine and excellent at transmitting heat. Usage of brass also has its limitations. The qualities that make them easy to produce mean that they’re soft enough when processing abrasive materials. The materials will then decline the nozzles over time. The material exiting widens the opening because it moves out of the nozzle. The function of the nozzle is two-fold. It mounts into the thermal block and extends the melt chamber, and typically most of the melted material is within the reservoir inside the nozzle’s tip. It also limits the amount of material that comes out and mechanically influences the placement of plastic on the bill pipe. Nozzles and thermal blocks are generally suited to specific filament diameters. But the method works better if you’re using the proper internal hot in diameter to match your chosen filament. The closer it holds to the filament, the more precision you’ll have in controlling the amount of extruded plastic. The most important factor while selecting a nozzle is the size of the opening diameter. The most common size used is 0.4 millimeters. 0.4 millimeter may be a good compromise between the dimensions of the 2 commonest diameters of filament, in terms of pressing down without producing an excessive amount of backpressure, and the width of the little bit of the plastic that’s extruded from a 0.4-millimeter nozzle is somewhere pretty on the brink of 0.4 millimeters, maybe a touch bit less counting on the material, and offers enough X, Y resolution to satisfy most prototyping needs. As there are more materials and more applications for 3D printing than ever before, there’s now a variety of nozzles available. 
Some machines ship standard with a 0.5-millimeter nozzle or wider, typically for printing large objects, and nozzles as small in diameter as 0.25 and 0.15 millimeters are available for very fine detailing. A wider or smaller nozzle doesn’t change the accuracy of placement in the X, Y plane, but changes the diameter of the bit of plastic leaving the nozzle. Certain materials are better processed with particular nozzle diameters. For example, filaments containing wood or metal powders can clog more easily with really small openings. The powder content might pile up onto the tip of the nozzle and block extrusion. Other materials are more viscous and need an excessive amount of force to exit a little opening effectively in any case. There is a variety of various terms for features of internal geometry within the nozzle. I will use the terms external tip, cone, shoulder, and throat to describe the space while the material travels from the melt chamber to the outside. As the material moves forward, it shrinks down for matching the width of the filament to the width of the nozzle diameter. This area is called the cone. It’s essential for guiding the forward pressure while extruding the fabric, ensuring that the plastic exits the basketball shot a line and not at an angle. The external tip establishes both, the outer diameter of the extruded material and also features a shape of its own. There’s a flat ring surrounding the opening and it has an additional mechanical function. It irons down any material that has risen upon the top surface of the print. The shoulder may be a flat internal ring round the throat. The throat is that the tiny tube that extends from the sting of the cone and bends the external tip of the nozzle. A throat isn’t typically very long. But with some materials, it can be helpful while evening out the pressure as the material emerges. In the early desktop printers, the nozzle was a hard and fast part of a hot end. 
source: https://www.researchgate.net/publication/339153639/figure/fig1/AS:857192693497856@1581382072553/Schematic-and-nozzle-size-of-the-PORIMY-3D-printer-The-outline-dimension-is-380-390_Q640.jpg BUILD PLATE: Now we shall see about the build plate which is in fact one of the most important parts of a 3D printer. After the fabric leaves the nozzle, it lands on your build plate. The build plate may be a surface on which you build your part layer-by-layer. The build plate is that the final step within the extrusion system. The aspect of the build plate that factors into the extrusion system, maybe a build plane. This is the layer on which extruded material lands. When a print starts, the build plane is that the same because of the surface of the physical build plate, but because the build progresses, the build plate rises moving up higher and higher, becoming the topmost layer of the pair part. The important things to debate once we will mention the difference between the build plate and therefore the build plane, are material adhesion, cohesion, warping, elephant feet, and heated build plates and surfaces. For this whole process to succeed, the fabric must stay where you set it. This means that it has a stick to adhere to something. In order to possess an honest first layer adhesion and accurate parts, you would like to form sure that the build plane is correctly calibrated in order that where your nozzle is driven across the XY plane by the motion mechanical subsystem, it remains the same vertical distance from the build plane, everywhere and also the build plate, ie, the top surface of the plate itself, where you deposit the primary layers of the part. When a print starts, these two flat planes should be precisely parallel to each other, with just the first layer of plastic keeping them apart. When you successfully bring these two planes into alignment, the machine makes an accurate model of where the platform and gear head are located. 
We always get confused by the term bed leveling. We don’t care if the machine itself is on a level with the world. We actually only care about the relationship of that nozzle, its range of motion, and its distance from the build plate just below. Another term for the CNC milling and machining world that’s an accurate metaphor, is tramming. When a CNC mill trams the highest surface of a piece, it’s cutting away the fabric across the plane to ensure that the worth understood by the CNC about the work plane is perfectly reflected within the physical world, also because of the digital world. Sorting out this element first is really more valuable than most of the opposite adhesion strategies because when the build plate is badly calibrated, the probabilities rise considerably, that your part fails, and fall off. Cohesion describes materials sticking to materials, through chemical or mechanical processes. Let’s talk about the adhesion strategies you can implement in your design. The most aggressive solution is named a raft. In your design, you’ll account for one or more layers of material set down flat on the plate, and you’ll then use this as the base for the rest of your part. It finishes up looking sort of a platform you would possibly see beneath a sculpture. Think of a sculpture of a horse. You only have the hooves of the horse touching down on the bottom. If it doesn’t stick well, then the whole horse is going to be lost. The material from your print will stick with itself more easily than other materials. So by employing a raft, you’re likely ensuring a pleasant, sturdy object is fixed down. While it serves an identical function to a raft, providing a bigger base for sitting the printed part, it is also solving the extra problem of cooling and warping. As the fringe of the part is exposed to the air and cools, sooner than the remainder of the part, it shrinks slightly moving far away from the print. 
By using a brim, you’ve delayed the cooling and warping of the important parts of your object. Some operators will model a raft into their designs, but we will use our 3D control software to automatically generate a raft. That’s the most common. When you produce a raft during this fashion, the way the layers are printed is going to be unique to every raft, guaranteeing that it sticks to the plate and may also be pulled far away from the bottom of the thing. After you’ve finished printing the part, you ought to be ready to easily tear off the brim. A skirt is produced in an equivalent way as a brim, but the layers are offset far away from the bottom of the thing and don’t inherit contact with the final printed part. While this does not necessarily appear to be it might help with adhesion, by printing the skirt first. Without the skirt, the first layers might not adhere as properly. A skirt also can help visually guarantee that the build plate has been properly calibrated to the nozzle. If the skirt is not of even height, getting thinner and thicker in different places, then you should stop the print and re-calibrate your build plate. It’s sort of a test drive that you simply take around the block if the block is that the build plate. Mouse ears were an early strategy in 3D printing, almost like a brim. A series of flat disks were incorporated into the planning around the base of the model. After the part was printed, you’ll easily tear the discs far away from the finished product. While less frequently used than other solutions, having this feature available when designing are often helpful, especially if you encounter just one tricky edge that you simply got to fix. We will now move on to the physical measures you can take during the actual print job to ensure proper adhesion. Let’s start with the heated beds. 
While using a heated bed, you have a heated surface that is not hot enough to melt the printed part, but it’s warm enough to keep the base material a bit flexible and prevent it from cooling and pulling away from the plate. Common materials for heated bill plates include aluminum and borosilicate glass. For a heated build plate strategy to work, you need to have a heating element in your 3D printer and a temperature monitoring solution to maintain the perfect temperature for your part. One of the setbacks from heating the plate too aggressively is named an elephant’s foot. An elephant foot error is when the temperature is high enough, that the rock bottom layer of the print has melted and expanded around the object. It looks a lot like an elephant’s foot, wide at the base and tapering off at the edges. Not all machines have a heated bed, and it’s possible to have a good adhesion by providing a better base of material that sticks down regardless of temperature. The simplest way is to offer your prints something to grip. If your surface doesn’t offer quite enough grip on its own, then you can apply adhesives. Some people like better to spray adhesives down on the plate rather than rolling it on with a glue stick or printing down PVA material. You need little or no PVA glue to carry the part down because PVA tends to stay pretty much to most build surfaces, and most printing materials. When you use this together with a heated bed, PVA glue offers another advantage. It helps evenly transmit the heat to the base of the part, and a twisting or levering action causes it to lose its grip, making it easier to remove your part at the end of printing. There are other glues in use, either specifically created for the sector of 3D printing or for other industrial applications. One such which is used extensively is 3M blue painter’s tape. The reason that the 3M Blue painter’s tape works so well is two-fold. 
The adhesive is often removed without leaving a residue, and therefore the top surface of the tape has particularly good micro fishers, making it a perfect grippy material for decent plastic. Image of a build plate source: https://hackster.imgix.net/uploads/attachments/1014924/_i2VLLbHVpp.blob?auto=compress&w=900&h=675&fit=min&fm=jpg
https://medium.com/@gonellaaditya67/nozzle-andbuild-plate-e5147e222074
['Aditya Gonella']
2021-12-15 07:07:47.241000+00:00
['Innovation', 'Technology', '3D Printing', 'Additive Manufacturing', 'Tech']
1,001
Using Artificial Intelligence in Trading
Artificial intelligence (AI), which is also called machine intelligence, is a wide-ranging branch of computer science focused on building smart machines that can perform tasks, which normally uses human intelligence. In fact, AI enables machines to learn from experience, adapt to new ideas and execute human-like tasks. This advanced technology has been used to power self-driving cars, voice-activated assistants and, now, trading. Traditional trading strategies use technical analysis conducted by humans, to assess investments and detect trading opportunities. This is done by analysing statistical trends collected from trading activities, like price movement and volume. However, when a technical analysis relies on a person’s ability to understand trends and patterns, it leads to a number of issues including human errors that may occur, biases and the limited amount of data a single person would be able to absorb. AI-based algorithms enable the analysis of more and deeper data, since this is required to train deep learning models because they learn straight from the data itself. With the use of machine learning, new ideas can be generated, and data can be collected from an unlimited number of articles, chatrooms, and other sources. This inspired the creation of Red Fort Exchange, a 360-degree Global Crypto Exchange and Trading Platform, which introduced AI-based algorithms meant to solve problems. It is the first platform that enables new traders to participate in crypto trading safely and without much hassle.
https://medium.com/@rfexchange08/using-artificial-intelligence-in-trading-6a09a6b4ed50
['Red Fort Exchange']
2020-02-05 02:44:44.526000+00:00
['Bots', 'Digital', 'Artificial Intelligence', 'Red Fort Exchange', 'Technology']
1,002
Osedea takes Lisbon: How playing hooky in Portugal has improved our bottom line
I’m passionate about talent management and culture. My objective is to make Osedea the best workplace for our team. Follow
https://medium.com/osedea/osedea-takes-lisbon-how-playing-hooky-in-portugal-has-improved-our-bottom-line-6a7ab282f0e1
['Ivana Markovic']
2020-09-09 15:03:21.625000+00:00
['Company Culture', 'Travel', 'Team Building', 'Human Resources', 'Technology']
1,003
How Science is Making it Possible to Eat and Pet a Chicken at the Same Time
Perspective Piece How Science is Making it Possible to Eat and Pet a Chicken at the Same Time Photo by Daniel Tuttle on Unsplash. At age 11, something changed. And it wasn’t that I got a Hogwarts invite from Professor McGonagall. Instead, I started feeling differently about animals. I created a fictional land called Evergold, where all the animals were safe and happy. Here, the fields and forests abounded with meat plants. Picture a willow, swaying in the wind, with sizzling hot bacon strips oozing from its branches. In Evergold, there were no animal farms or slaughterhouses. Even the moose and wolves could be friends, meeting up for peaceful games of tag. Today, startups creating cultivated meat are on the cusp of a great achievement for humans and animals. By growing meat directly from cells, they’re paving a more nonviolent future. The new meat tech could also greatly reduce land and water use, and prevent pandemics. Slaughter-free bacon is around the corner Winston Churchill made a prediction in 1931. Cultivated meat enthusiasts love to cite his words: We shall escape the absurdity of growing a whole chicken in order to eat the breast or wing, by growing these parts separately under a suitable medium. 80 years and many sci-fi depictions later, meat cultivation was finally taking off in 2010. Jason Matheny and Isha Datar each published influential papers that year. Then in 2013, Mark Post debuted the world’s first cultivated hamburger at a conference. By May 2020, New Scientist estimated there are some 60 startups working on this endeavor internationally. And recently in August, 50–100 lucky applicants were chosen to sample the cultivated version of bacon! These taste tests were done by Mission Barns, based in San Francisco, Mission Barns describes how they do bacon on their website: Our process begins by isolating cells from an animal and placing them in a warm cultivator which mimics the animal’s body. 
The cells grow naturally as they do in a cow, chicken, or pig as we feed them nutrients including vitamins, sugars, and proteins. After the cells fatten the cultivation process is complete. We then harvest the meat, throw it on a pan, and enjoy. Mark Post’s groundbreaking 2013 hamburger did use cells from a slaughtered animal. Yet he since cofounded Mosa Meat and the team gets their cells from small biopsies, where the healthy live animal is given anesthesia. Memphis Meats, SuperMeat, Finless Foods, and JUST are just a few of the other cultivated meat companies. Their works in progress span many kinds of meat, including seafood. Two hubs for the industry are Israel, and the San Francisco Bay Area, where I once had the honor of dog-sitting for a meat cultivation scientist. The new meat tech could prevent pandemics and feed your kitty “Cellular agriculturalists” are the scientists making my wistful pangs for Evergold more than just a fantasy. My vision is a world where cultured meat is on shelves, and decreases the demand for meat from animals. This would have a net positive effect on the planet. -Isha Datar But besides the obvious perk of sparing Wilbur the pig, what other benefits does meat cultivation offer? The nutrition of real meat, made even better? Plant-based marvels, like the Beyond Breakfast Sausages and Impossible Burger that entered stores this year, taste almost identical to our familiar meaty favorites. However, some people wish for a slaughter-free option that’s nutritionally identical to animal flesh. Cultivated meat is potentially even better, because scientists can adjust the nutrition. For example, Cubiq Foods has cultivated a poultry fat that’s high in EPA and DHA omega-3s. What is it about red meat that makes it linked with higher rates of colorectal cancer, heart disease, and type 2 diabetes? Perhaps meat cultivators and nutritional scientists of the future can collaborate on safer red meats. 
Avoid contamination and pandemics Cultivated meat reduces the need to handle huge herds of animals. This is a 3-way win for food safety. First, making meat in a sterile setting can prevent foodborne illness and contamination. Second, about 73% of antibiotic use is for meat. This tripled since 2000 and has added to the growth of antibiotic-resistant bacteria. The CDC states 2.8 million U.S. Americans get AR diseases annually. Growing meat from cells instead of billions of livestock would protect the efficacy of antibiotics. Third, no longer breeding vast herds of animals will reduce the risk of pandemics. According to a 2013 United Nations report, 70% of new human diseases from recent decades had come from animals. Feed carnivorous pets without killing other animals There are at least two companies now cultivating pet food. Because Animals and Bond Pet Foods have established themselves with plant-based treats and supplements; meanwhile, they’re mastering cultivated products and preparing to introduce them to the market. “Be the first to buy cultured meat pet food,” beckons Because Animals’ homepage. Imagine a world where both the staunchest human carnivores, and creatures like our cats, can be fed animal-based meat without slaughtering other innocent beings. Will cultivated meat help with the climate crisis? Environmental benefits are perhaps the most touted hope of cultivated meat. Hanna Tuomisto at Oxford headed an impact assessment in 2011 to estimate the potential. Depending on the type of meat product compared, cultivated meat involved: 78–96% fewer greenhouse gas emissions 99% less land 82–96% less water 7–45% less energy (except for poultry, which had lower energy use) This report sparked further studies by Tuomisto and others that were more inconclusive about climate. In 2019, Oxford projected 1,000-year climate impacts. They compared 4 different cultivated meat scenarios and 3 conventional beef scenarios. 
The study suggested cultivated meat would be vastly better for at least the next 100 years, as you can see on the graph. 3 of the 4 cell-based scenarios outperform cattle-rearing impacts for at least 800 years. However, given how the different emissions of each system behave, cultivated meat’s worst-case scenario would be worse than the cattle systems after about 150–400 years. The Good Food Institute, a nonprofit leading alternative meat, gave a response to the 2019 study. They suggested that since cell-based meat uses 99% less land, the extra land could be used for clean energy production and carbon sequestration, thus increasing the difference for climate. As the new tech improves, they believe its advantages over traditional meat will increase. 14 scientists were awarded $3 million last year for alternative meat research. At this point, cultivated meat looks very promising for land and water use. We’ll see if upcoming analyses clarify the energy and climate factors. What are some challenges facing cultivated meat tech? One hilarious challenge of cultivated meat? What to call it. There are at least 5 terms in common use that all begin with C: cultivated meat cultured meat cell-cultured meat cell-based meat clean meat Cultured meat seems the most common, being the title of Wikipedia’s page on the subject. Cell-based meat is nice because it neatly contrasts with “plant-based meat,” the preferred term for brands like Beyond Meat. The Good Food Institute published research in September 2019 that advocated for cultivated meat. This term appeared to land the best with collaborators and consumers. Speaking of consumer acceptance, GFI’s 2018 survey showed 66% of US Americans were willing to try cultivated meat. Survey results have varied depending on how the concept was framed. Economy The most obvious obstacle of cultivated meat is economy. Quartz reported the following timeline of cost reduction: 2013: $1,200,000 per pound. (Mark Post’s original beef patty.) 
2017: $9,000 per pound. (Memphis Meat’s chicken.) 2018: $1,000 per pound. (Memphis Meats again.) 2019: $100 per pound. (Aleph Farms’ beef patty.) Now that the technology has scaled up, it may be a matter of time before it gets competitive with conventional meat. Safety and consumer acceptance Two of the elements that lower cell-based meat’s cost also raise eyebrows. Growth factors are used. This parallels the use of hormones for livestock. In addition to hormone-free, many people want non-GMO. But cells are genetically altered so they grow indefinitely. This is called immortalization. IntegriCulture’s CulNet System offers a solution. They better mimic the whole body of an animal in which cells grow, making the process more economical without needing growth factor. Whatever production methods win out among the various startups, regulation is underway. In March 2019, the USDA and FDA announced they would oversee cultivated meat in the United States. It’s been tremendously exciting to watch this new meat tech take off, and to think where it will be in 5–10 years… How to pet and eat a chicken at the same time Cultivated meat could be a dream for: animals who don’t want to be slaughtered people concerned about them, but hooked on meat future generations we hope will live on a sustainable planet Time and research will tell. To give us a sneak peek, JUST, Inc. filmed an “out-of-body” taste-test of their chicken nuggets. The nuggets were grown from a single feather shed by Ian. Ian the rooster. As people sat outside eating, there was Ian right beside them. Alive and well, waddling around the picnic table. Surreal. That’s how you eat an alive-and-well chicken and pet them at the same time. ❤
https://medium.com/creatures/how-science-is-making-it-possible-to-eat-and-pet-a-chicken-at-the-same-time-61927fb8fad1
['Phoenix Huber']
2020-12-15 23:56:25.345000+00:00
['Animals', 'Food', 'Creative', 'Technology', 'Climate Change']
1,004
Software Outsourcing Process And Models for Successful Project Completion
Software Outsourcing Process And Models for Successful Project Completion Software development outsourcing offers many positive outcomes and helps you save money and time. Let’s see the models and processes for outsourcing your software development project before completion. Vijay Khatri Follow Dec 13, 2021 · 6 min read The software outsourcing process commences when a business is facing certain limitations and decides to employ a third party to carry out the work on its behalf. Outsourcing software development projects to a trusted development partner enables you to get acquainted with top-notch web, mobile, and other software products. Not only that, outsourcing offers several benefits and other positives that enterprises and individual software developers can experience. So, if employing trusted third-party software developers resonates with you, here’s a step by step procedure for outsourcing software development projects: Software Development Outsourcing Process The following guide helps you navigate through your outsourcing journey for hiring a reliable outsourcing partner. 1. Defining Your Expectations and Goals Without clear and concise goals and expectations, the outsourcing team would find it difficult to comprehend the software development project. Thus, the first step before you do anything is to prepare a detailed project outline, what features to include, technology to use, and a timeline for completion. Include your internal team or even the people you know from your business relations if possible for a brainstorming session. This would help you ensure your software development projects sound interesting and achievable. 2. Devise Scope of Work Preparing the scope of work is the next step in the process of outsourcing software development. It does not make sense to reach out to a software outsourcing partner without any documentation that clearly outlines the objective and scope of work. 
Devising such documents would require things such as product specification documents, plans, budget estimates, system admin documents, and other reports. These are just the tip of the iceberg, there’s a whole bunch of things needed depending on the type of the project. 3. Research & Find Reliable Software Development Partner Finding a trusted and reliable software outsourcing partner is a tough nut to crack. The right software development agency is decided based on the project requirements, goals, and scope of work. The first thing you should do is get references from your team, colleagues, business partners, or just google it. Also, there are various company comparisons and review sites such as clutch to find a reliable outsourcing company. Shortlist some of the agencies and filter out to find the best and reliable one to do the job. To shortlist companies, you’d have to do a bit of research first, which can be a tedious task but have huge implications on your software development project. There are certain ways to research and find the best outsourcing agencies such as checking out their portfolio, reviews/testimonials from past clients, years of experience, etc. 4. Contacting the Outsourcing Partner The first interaction with the agency of your choice is going to be crucial. You can approach more than one agency with similar interests and conduct an interview with them to get an idea if they are experts in their field or not. Schedule a video interview or a phone call to establish a personal relationship. Here are a few things you should discuss while on a call with the outsourcing agency: Ask about their process, past experience, and technical skills for completing the project. Set out your goals and expectations from the start. Discuss about their prior projects, what problems they had, and how they resolved them. Find out the team size and who’ll work on your project, who will be the point of contact, etc. 
Discuss the budget, time scale, and other important aspects of the project. Invite the outsourcing partners of your choice and get to know them for better understanding and relationship. Models for Software Outsourcing Project Most sophisticated software development agencies have their own standards which they follow for efficient work. Knowing such standards and models of work during the interview process would be imperative. Also, it’s a subtle way to let others know and look more professional while explaining the outsourcing models. Here are the software outsourcing models mostly in use: 1. Staff Augmentation Model One of the simplest models of outsourcing, the staff augmentation model is where the software development tasks are carried out by the outsourcing team. In simple terms, it’s like leasing a team of experts from other agencies, be it onshore or offshore, and providing them tasks to work upon and complete the project. It sounds similar to an offshore development center where you set up a whole team of experts to carry out your software development project. Also known as team augmentation, this outsourcing model enables your in-house team to maximize development efficiency while retaining control over the project. Aspects like defining the work process, managing the project to work on, etc. are controlled by the client. 2. Dedicated/Managed Team Model A dedicated or managed team outsourcing model is where you outsource your software development project to a dedicated team of experts. Here. you’ll get a team of developers to work and complete certain tasks and project delivery pipeline. Also, you’ll have direct access to the team leaders and project managers that take care of the software delivery schedule to ensure the project stays on track. The client is still in control of making software development decisions and can control projects individually. 
However, they can pass on a great deal of decision-making to the outsourcing providers in cases where a software product is required to be maintained. 3. Project-Based Outsourcing Model In the project-based outsourcing model, your only concern is the result which is the software product and not the means behind it. Here, your development partner looks after and manages the entire software development process as per the specifications and requirements provided prior to project commencement. The client hands off the requirements to the outsourcing partner, who is responsible for developing and delivering the final product. Although the client has the least amount of control, they still can have some oversight to ensure that the product quality remains intact. Software Outsourcing for Successful Project Completion Software outsourcing models help you get started with your journey to software development. Once you have established who’ll have more control over project decisions, you must tick off the following things for successful completion. Straighten out who’s gonna provide the technical support when and if there arise any out-of-scope issues. Know yours as well as the outsourcing provider’s limitations of doing certain things. And since you have mutually understood who’s in control, distribute and assign tasks to particular team members. Figure out what they are best at and what needs improvement. The last thing is to have some faith and trust in the outsourcing team of your choice. You don’t always have to lean on their shoulders considering they know what they are doing and that’s why they have been recruited. Final Words Outsourcing software development has countless positive outcomes for your business. It saves money and time and also enables you to get high-quality solutions from a team of experts. Now the only thing between you and an outsourcing software project is choosing the model that suits your requirements best. 
Also, following the steps listed in the outsourcing process above, you must first define your goals, requirements, scope of work, etc. before finding and reaching out to a reliable outsourcing partner. On that note, Ashutec Solutions Pvt Ltd. is a reliable outsourcing services provider trusted by many small to large enterprises. Our experienced and adept team of professionals is ready to serve you and offer you unique, scalable, and maintainable software and product development solutions. Contact us today or write to us at [email protected] for further discussion on the topic. Also, follow ashutec to read more such articles.
https://blogs.ashutec.com/software-outsourcing-process-and-models-for-successful-project-completion-72b5248b553e
['Vijay Khatri']
2021-12-13 05:14:20.526000+00:00
['Technology', 'Software Development', 'Outsourcing Services', 'Outsourcing', 'Outsourcing Company India']
1,005
EP 45: Joseph Tsai, Jack Ma's Golden Partner
in The New York Times
https://medium.com/@terrynut/ep-45-joseph-tsai-%E0%B8%84%E0%B8%B9%E0%B9%88%E0%B8%AB%E0%B8%B9%E0%B9%81%E0%B8%82%E0%B9%89%E0%B8%87%E0%B8%97%E0%B8%AD%E0%B8%87-jack-ma-7e7ac747bd2d
['Nut P']
2020-12-26 12:02:10.023000+00:00
['Technology', 'Biography', 'NBA', 'Ecommerce', 'Business']
1,006
How To Raise Funds Without Giving Away Equity
"How to Raise Money Without Giving Away Equity" is the question every company wants an answer to today. Every company dreads the day when it has to raise funds. Every time a company raises funds, the owners have to part with a portion of their equity in exchange for the money they raise. The funds always come with a set of conditions and terms, most of which involve reducing the founders' stake or sometimes putting them in tough situations. The most common way funds are raised is via VC funding. Billions of dollars are invested in companies every year via VC funding. These capitalists search for the best investments and the best teams, whom they trust to successfully run a company provided its business model is right. When this trust is weak and the VCs aren't confident, they will lure the founders towards venture debt: a kind of loan that the founders get against a portion of equity, with conditions to repay it on time. After a company has grown past this stage, it can definitely raise massive amounts through an IPO (Initial Public Offering), which makes the company public. The common man can then invest in the company by purchasing shares from the exchanges where the company gets listed. So how do you raise money without giving away equity? These days there is a new medium. The developers of the world have devised a new strategy to raise funds. Using the blockchain ecosystem, a token-based system has evolved that helps companies crowdsource funds through an ICO (Initial Coin Offering). What is an ICO? Startups that do an Initial Coin Offering basically raise funds via a crowdsale by accepting cryptocurrencies in exchange for tokens. These tokens are like shares, but they aren't shares. These tokens get listed on exchanges, and individuals purchase them with the possibility of their prices going up once they are listed on exchanges around the world. Last year saw startups raise close to $5.6 billion through ICOs. This is probably the reason for all the hype! How Do ICOs Work? 
All ICOs start with a basic idea. A startup comes up with an idea for a blockchain-related project and proposes it to the blockchain community through different social media channels to see what kind of traction the idea can generate. If the startup thinks it has the community's acceptance, it will draft a basic white paper. This white paper will consist of all the intricate details of the project: the fund allocation, the need for the token, and the credentials of the team that will work on the project. In the following step, the token economics is finalized, covering details such as the number of tokens, the price of the tokens, and other similar economic parameters. In the next step, the ICO team works towards creating the required buzz for the tokens with the right marketing campaigns within the community. These campaigns last for a limited period of time, as every ICO has a fixed time frame and a detailed process for selling these tokens. Once investors receive the tokens they purchased through the ICO, the tokens go live on exchanges for trading. This is where the initial investors usually make a decent return on their investments. This is exactly how an ICO functions, and "how to raise money without giving up your equity". Obviously, this is a summary; this article isn't a detailed rendition of the process, and a lot of work goes into doing an actual ICO. Indian ICOs In the last year, a couple of Indian companies have also executed successful ICOs and raised some money. Globally, a lot of companies have successfully raised capital, often much more than what they asked for. Some of those who have launched or are looking to launch their own ICO are Drivezy Belfrics WandX SpringRole Machaao EasterEgg Cashaa How to start your ICO? To start your ICO, the most important thing is to build a dedicated and strong team. 
After that, below are the basics of an ICO: Develop a Token Plan Design a Token Sale and Infrastructure Plan Create a lifecycle Announcement Offer Marketing and PR Start Sale Content Plan and White Paper Public Guides Token Sale Plan Community Building Bounty and Referral Plan Well, these are the basic steps for your initial understanding. To know the details, write to me at [email protected] How to avoid ICO scams as an investor? With major success stories also comes risk. While there have been multiple successful ICOs, there have also been a few whose teams ran away with the investors' money. Here are certain tips on how to avoid them: Check the team behind the ICO Usually, when the ICO doesn't have a team that can pull it off, the ICO is guaranteed to fail. Just as VCs check the basic background of a team, ICO investors are advised to follow the same process. Read the white paper Every ICO releases a white paper today. Go through the entire white paper for all the important details. Do not fall for fancy words and jargon. Make sure you understand the clear reason for the token sale and the utility of these tokens in the future. Avoid ICOs that fail to answer these questions. Engage with the community All active projects have communities on Telegram, Reddit, and Slack. Speak to people in these communities and notice how active the community is. The more active, the better, since that shows the commitment of the developers and the people towards the project.
https://medium.com/blockinventors/how-to-raise-funds-without-giving-away-equity-c3269ce3688d
['Akshay Gokalgandhi']
2018-04-19 10:06:01.304000+00:00
['Equity Crowdfunding', 'Fundraising', 'ICO', 'Blockchain', 'Blockchain Technology']
1,007
Professional Corda Development Services
PerfectionGeeks Technologies is one of the best names in the business when it comes to providing top-quality Corda development services. We are highly acclaimed in the business for developing interoperable Corda applications. Our developers are always ready to help you with applications that are compatible with Corda and Corda Enterprise. You can trust our team of specialists to provide you with the best development services, tailored to your business needs. We have the most professional team of developers to assist you and make your experience productive. Experienced Corda Developers At Your Service Our team of experienced Corda developers will implement the core components while working on the respective application, which include states, contracts, and flows. This will always help you have a smooth and efficient development service according to your specific needs. We will match your business needs by assessing Corda's direct transactions, interoperable blockchain networks, and privacy considerations. Our team has the required experience and knowledge to deliver as per your given specifications. So, do not hesitate to reach out to our professional developers to have your Corda applications developed without any problem at all. Covering The Full Range Of Corda Development Services Here at PerfectionGeeks Technologies, we take pride in providing an extensive spectrum of Corda development services that will help you automate your business procedures. We also help you with the facilitation of trusted transactions. Below is the wide range of Corda development services provided by our team; take a look: Corda Consulting And Advisory We have the best in-house development team to provide best-suited Corda advisory solutions that help you gain maximum benefit from the Corda platform, and that too at very nominal pricing. 
POC Development We have the most qualified team to help you with proof-of-concept development services, which will give your ideas a reality check as well. So, turn to our team of developers and let them help you with your needs for decentralized applications. Smart Contracts Development You can always get connected with our Corda development team and get a digital platform to make your smart legal contracts, which are completely safe and protected from all negative aspects. Time to bid adieu to conventional contracts. Legacy App Modernization PerfectionGeeks Technologies is highly acclaimed in the business for implementing the best methodologies, which will help you get your existing applications transferred to distributed ledger technology. Reach out to our team now for quality service. So, do not hesitate to get connected with our team of Corda development service providers and have your needs covered in a very precise way! Consult now!
https://medium.com/@perfectiongeeks/professional-corda-development-services-dc71a60a8abd
['Perfectiongeeks Technologies']
2020-03-12 10:18:41.286000+00:00
['Ethereum Blockchain', 'Blockchain Technology', 'Web Development', 'Web Design', 'Android App Development']
1,008
10 IMPORTANT USES OF BLOCKCHAIN TECHNOLOGY IN BANKING INDUSTRY
Blockchain technology has generated hype from the day of its invention and has gained a lot more attention in the last few years. Because of its transparency and flexible nature, blockchain is being adopted in many industries for their growth and development. In recent years, data protection has been a major issue faced across the banking industry. Blockchain plays a vital role here by increasing trust, transparency, and privacy in data sharing. Nowadays, all banking industry technologies are challenged on security. By enhancing trust, transparency, and privacy in data sharing, blockchain technology may aid in the acceptance of digital protection solutions. Blockchain technology has now been running for many years and is still running successfully. The technology started primarily with the cryptocurrency Bitcoin, and its big role is to provide a decentralized exchange of records. Blockchain has now expanded from finance into many other sectors as well. Here are the 10 most important uses of blockchain in the banking sector: Payments, Especially Cross-Border Payments Stock Exchange and Share Trading Trade Finance Digital Identity Verification Syndicated Lending Accounting, Bookkeeping, and Audit Credit Reports for Businesses and Individuals Hedge Funds Crowdfunding (ICOs) Peer to Peer (P2P) Transfers Here we explain each benefit: Payments, Especially Cross-Border Payments Payments are the first and foremost benefit of any banking and financial system. When it comes to blockchain finance, both central and non-central banks all over the world are now tapping into this new technology for payment processing and the potential issuing of their own digital money. This trend also holds for cross-border payments, which have been powered mostly by Swift or Western Union until now. 
Stock Exchange and Share Trading: As you know, traditional stock exchange processing time is very high, involves lots of stages and administration, and can take up to 3 days. However, the decentralized nature of blockchain technology in banking can remove all those unnecessary middlemen and enable trading to be run on computers all over the world, unified into an interconnected network, rather than on dedicated servers. Trade Finance Blockchain also plays a major role in the trade finance sector: financial ventures that are related to international trade (not stock exchange trading). Even in today's disruptive world of technology, many trade finance ventures still involve lots of paperwork, such as bills of lading, invoices, letters of credit, etc. Of course, most order management systems permit you to carry out all this paperwork online, but it still consumes lots of time. Digital Identity Verification Digital financial transactions are impossible without identity verification. However, this verification requires a lot of steps to be taken, such as: Face-to-face verification. Authentication: Bank users need to prove their identity when they log in to the service. Authorization: Proof of the user's intent is needed. Every step needs to be repeated for each new service provider. However, blockchain makes it possible to securely reuse identity verification across other services. Syndicated Lending Syndicated lending refers to the giving of loans to individuals by a group of lenders, typically banks (a syndicate). Due to the several participants involved, the traditional processing of such syndicated loans by banks can take up to 3 weeks. Banks face the following challenges with syndicated loans: Know Your Customer (KYC): user identity verification. Bank Secrecy Act (BSA) and Anti-Money Laundering (AML): legal requirements aimed at preventing, detecting, and reporting money laundering activities. 
Blockchain financial services can supercharge this process and make it more transparent. With blockchain's ledger, banks within a syndicate can distribute tasks related to local compliance, KYC, or BSA/AML and link them to a single client block. Accounting, Bookkeeping, and Audit Probably no domain involves as much paperwork as traditional accounting, and it is being digitalized relatively slowly. The reason may lie in strict regulatory requirements regarding data validity and integrity. Therefore, accounting is another domain that can be transformed with the power of blockchain technology in finance, from simplifying compliance to streamlining traditional double-entry bookkeeping. As a result, the records are more transparent, and any attempt at forging them is almost impossible. Think of it as an "electronic notary" verifying the transactions. In addition, blockchain's smart contracts can be used to automatically pay the parties involved. Credit Reports for Businesses and Individuals Blockchain finance can also help individuals and small businesses to quickly get loans based on their credit history. It may take a long time for lenders to review a borrower's loan history. Traditional business credit reports provided by middleman credit services are not available to small business owners. Besides, paying companies to access one's sensitive data sounds strange and insecure. However, blockchain can provide tools that will allow borrowers to make their credit reports more accurate, transparent, and securely shareable. Hedge Funds A hedge fund is an investment partnership consisting of a fund manager and a group of investors (limited partners). However, hedge fund participants are typically traders rather than ordinary investors. The purpose of a hedge fund is to maximize investor returns and decrease risks. Crowdfunding (ICOs) Crowdfunding involves raising funds by asking a large number of people each for a small amount of money, typically online. 
This industry is a perfect fit for blockchain technology finance. Initial Coin Offerings (ICOs), financial instruments that help launch young cryptocurrencies, are the best-known example of blockchain-based crowdfunding. ICO tokens are similar to shares of an organization, though usually without an exchange of equity. Instead, the investors buy tokens either for an existing cryptocurrency, such as bitcoin, or for physical currency, such as US dollars. Later, in case of success, they can sell these tokens on cryptocurrency markets. As in crowdfunding, funds are raised to implement a concept at a stage when the company has no product. Peer to Peer (P2P) Transfers With peer-to-peer transfers, clients can transfer funds from their bank account or credit card to another person's account via the Internet or a mobile phone. The market is full of P2P transfer applications, but all of them have some limitations. The aforementioned issues can be solved with blockchain-based, decentralized apps for P2P transfers. Above, I have covered and explained these benefits of blockchain in the banking industry; keep them in mind if you need blockchain in your banking business.
https://medium.com/@brugusoftwaresolutions/10-important-uses-of-blockchain-technology-in-banking-industry-4bb8d601d18e
['Brugu Software Solutions']
2020-11-18 13:23:01.600000+00:00
['Blockchain', 'Blockchain Technology', 'Blockchain In Banking', 'Blockchain Startup', 'Blockchain Development']
1,009
Apple Once Passed on Acquiring Tesla
When news leaked that Apple was planning to begin production of an electric "iCar" by 2025, Elon Musk responded. At first, he questioned Apple's tactics for producing safe and efficient batteries and whether they actually gave the company a competitive advantage over Tesla. Then, he dropped this bomb: In 2017, perhaps the time period to which Musk is referring, Tesla wasn't far from death. On "Axios on HBO," Musk said his company was "within single-digit weeks" of folding as it struggled to ramp up Model 3 mass production. Musk also said he "personally redesigned the whole battery pack production line and ran it for three weeks," trying to paint the picture of how dire the company's situation was. Musk said the amount of work he put in to save the company hurt his brain and his heart and that no one should put in the amount of work he did. These reflections explain why Musk reached out to Apple. The company was bleeding money, as Musk said. While Tesla probably wouldn't have actually died in 2017, it would have had to get money from somewhere. An acquisition by a large-cap company like Apple would have taken all that stress off of Musk. He would have had the financial backing that could keep Tesla afloat. In a separate interview in 2018, Musk said ramping up Model 3 production was a "bet-the-company" situation, one he didn't see Tesla being in again. Two years later, Tesla is worth over $612 billion, making it the most valuable car company in the world, more valuable than the next six car companies combined, mainly due to its stock market returns. Musk said Apple would have acquired Tesla at 1/10 of its current valuation, meaning around $60 billion. Hindsight is 20/20, but that is obviously a huge miss for Apple. Apple CEO Tim Cook passing on the meeting doesn't make much sense. Tesla, a car manufacturer, is obviously in a much different space than Apple is with its technology. 
But part of what makes Tesla so unique is that it, too, is a technology company. And, according to the Reuters "iCar" report, the two companies now seem to have similar goals. The report states that Apple's new car plans, referred to as "Project Titan," have been on-and-off since 2014, when the company originally planned to produce a vehicle. Apple eventually stepped away from the plan but reignited it in 2018 by re-hiring veteran employee Doug Field, who returned after working at Tesla, to head the project. After more than a year of building a team behind the scenes, Apple now feels comfortable moving forward with making a consumer product: an electric, self-driving vehicle. There is doubt within the report, however, that Apple could see a profit on vehicle production in any reasonable timeframe. It took Tesla 17 years to become profitable. As discussed earlier with Tesla, mass-producing these types of vehicles can become a money pit. Apple will supposedly not begin production until 2025, giving them plenty of time to find suppliers of technology and manufacturers for the car. By that time, though, every other company working to produce electric and autonomous vehicles will be miles and miles ahead of Apple. "As we see with Tesla and the legacy auto companies, having a very complex manufacturing network around the globe doesn't happen overnight," Trip Miller, managing partner at Gullane Capital Partners, said in the report. In 2020, Musk became the second-richest person in the world thanks to Tesla's extreme success in the stock market, returning nearly 700% in the past year. Had the company been sold to Apple three years ago, Musk would not be in the position he is in now and Tesla might not be either. Apple is doing just fine with a valuation of over $2.2 trillion. Moving forward, though, it has to try to find a way to compete with the tech-auto giant that it could have had a stake in.
https://medium.com/swlh/apple-once-passed-on-acquiring-tesla-d78b2f58d389
['Dylan Hughes']
2020-12-24 22:33:20.102000+00:00
['Transportation', 'Technology', 'Apple', 'Sustainability', 'Tesla']
1,010
Why Beautification Filters are the Last Stand Before the Explosion of Body Modification Augmented…
How you look is controlled by your genetics, but what if it wasn't? Each of us is constrained by the limitations of our species in the look we are capable of projecting to the outside world. Sure, you can pierce and transform aspects of your physical body, but the self as a canvas is limited by the amount of change we can make before it's fatal or too harmful. The extent to which this is true has been tested by almost every culture. Tribal copper neck rings, lip plates, ear stretchers, and scarification are historical and modern examples of some of the things that have been done. Whether to signify religious ceremony, increase beauty, or help you display who you are to the outside world, each involves altering who we are physically. We are about to embark upon body modification to an extent that was hitherto unimaginable. Technology will enable us to manipulate our appearance without limits or the risk of damage; more expression with less harm sounds like a significant leap forward. BodyMod AR is the next revolution in self-expression Replace your hair with a blazing mane of fire Insert swimming goldfish in place of your pupils Watch tattoos shift vividly across your skin The future is virtual. Augmented reality will enable personal expression in the form of modification limited only by your imagination. Consumption will be through permanent augmented reality units implanted upon our retinas, or contact lenses which enable us to participate in this new world. Instagram may have enabled us to post pictures of our most beautiful selves, but the next phase of technological innovation will enable us to alter not just how we look, but how we are perceived permanently by everyone around us. Of course, people will have the ability to switch off, casting aside the wondrous reality we have created for ourselves, and consume our meat bodies as genetics intended. Others will choose to remain plugged into this alternative reality indefinitely. 
On the surface, the world may appear as it has always been, while AR will display the depth of your imagination on your person. Why must we accept the limitations that our genetics impose; why can't we be green or blue? Why must we have freckles and apply makeup? What if we could virtually fix our complexion using a beautification filter, safe in the knowledge that the 'us' others consume will always be our best selves? Who needs hair when we know it's always perfectly styled in AR and can be altered at the touch of a screen? Want a new colour? Just as simple. Who needs new clothes when our perfect attire can be layered on top of the bland garbage we are actually wearing? Who needs to decide on one permanent static tattoo when we can virtually design whatever dynamic display we want and alter it at will? Future technology will allow each of us to be exactly who we have always wanted to be in the eyes of those who see us. We will never have to worry again about whether it's a bad hair day, our outfit isn't clean, or we no longer love the tattoo we picked while on holiday with friends at 18. Our bodies will become a platform to project our thoughts and feelings visually, and our physical capabilities will no longer limit what's possible. What started in the fields with our ancestors, who applied products to their faces to achieve a more beautiful look, evolved into hair products, was transformed by technology, and will be revolutionised by the next wave of possibilities. Would you like to wear that SnapChat filter permanently? How about the colour of your eyes when you instagramise your life? That's just the start of what's possible when we endow ourselves with the ability to modify the reality we inhabit and others perceive.
https://chrisherd.medium.com/why-beautification-filters-are-the-last-stand-before-the-explosion-of-body-modification-augmented-28e2f355dd2c
['Chris Herd']
2018-11-07 09:45:27.406000+00:00
['Technology', 'Future', 'Beauty', 'Makeup', 'Augmented Reality']
1,011
How to Improve Productivity in Customer Support With A.I.
Figure 1: Sources of customer support queries Companies receive support inquiries from various channels. These may include emails, support tickets, tweets, chat conversations with customer support representatives (CSRs), chatbot conversations, and more. This is a lot of data that you are dealing with, and it's mostly unstructured and scattered in nature, making it that much harder to manage. The question becomes: how can you leverage all this text data to improve speed in responding to customer support inquiries or reduce the number of tickets being opened? This can partly be done with automation using Natural Language Processing (NLP) and Artificial Intelligence in general. This article will give you an overview of 5 areas in the customer service workflow that can benefit from A.I./NLP-based automation. This is not an exhaustive list, but a list that applies to most customer support teams at medium to large organizations. #1 Recommend best answers When CSRs try to respond to a customer problem, they can be overwhelmed with identifying the best answer from the pool of possible answers. What they need is one answer that will address the customer problem. Some companies maintain an exhaustive list of problems and corresponding answers which the CSR has to search through, sometimes even manually. This can be painfully slow and draining if you have to perform a search for each and every question. NLP can be really useful here by recommending the best answers given a support inquiry. It becomes even better when the answers have an associated "score" that indicates the likelihood of a particular answer solving the customer problem. Using this approach, instead of explicitly performing a search, the CSR now has information pushed to them automatically, preventing a break in their workflow. With this, the response time can be much faster, which also means CSRs will be able to handle a larger volume of support issues. 
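The answer-recommendation idea can be sketched with plain bag-of-words cosine similarity. This is a minimal illustration under stated assumptions, not a production recommender: the sample answer pool and the `recommend_answers` helper are hypothetical, and a real system would use TF-IDF weighting or trained embeddings instead of raw word counts.

```python
import math
import re
from collections import Counter

def bag_of_words(text):
    """Tokenize into a lowercase bag-of-words vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def recommend_answers(inquiry, answer_pool, top_k=3):
    """Return (answer, score) pairs, best first, so the CSR sees
    a ranked shortlist instead of searching manually."""
    query = bag_of_words(inquiry)
    scored = [(ans, cosine(query, bag_of_words(ans))) for ans in answer_pool]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

pool = [
    "To reset your password, click Forgot Password on the login page.",
    "Billing statements are emailed on the first of each month.",
    "You can add a profile picture from the account settings page.",
]
best = recommend_answers("How do I reset my password?", pool)
```

The score attached to each suggestion is exactly the "likelihood" signal described above: a CSR can trust high-scoring answers and fall back to manual research when every score is low.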
What's even nicer is that the CSRs will not be overwhelmed by the end of the day due to the number of searches they have to perform. #2 Suggest historical threads While some support questions can be easily answered with the recommended best answers, others can be complex, requiring extended research. One way for CSRs to solve complex issues is by looking into related historical threads (which have been successfully resolved) and understanding how those issues were resolved. With this, the CSR will be able to better resolve the issue at hand or form a more complete answer to the support question. With NLP and Text Mining technologies, we can automate this process by recommending related historical threads for any given support inquiry. This saves CSRs from having to conduct various searches or contact peers and their manager for help on an issue. The benefit of surfacing related historical threads is (a) the potential decrease in response time and (b) reducing follow-up support questions as complex issues are resolved in fewer interactions. #3 Group questions to limit context switching As we all know, context switching can be hard. Going from resolving issues related to signups to billing and then back to signups can be a productivity killer. According to Bud Roth, author of Be More Productive: Slow Down, distributing your energy over a wide variety of tasks can dilute your effectiveness the same way interruptions do. By grouping similar support questions, CSRs can address similar problems in chunks, where the knowledge bank that they'll have to tap into and the pool of potential answers are related. Figure 2: Grouped customer support questions With the use of clustering techniques in Natural Language Processing and Text Mining, we can automatically group similar questions as shown in the example in Figure 2. Notice that the first group of questions is all about adding a profile picture. 
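The grouping step can be approximated even without a full NLP stack. The sketch below uses a greedy, threshold-based pass over bag-of-words cosine similarity; the threshold value and the sample questions are illustrative assumptions, and a production system would use k-means or embedding-based clustering instead.

```python
import math
import re
from collections import Counter

def bag_of_words(text):
    """Tokenize into a lowercase bag-of-words vector."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def group_questions(questions, threshold=0.3):
    """Greedy single-pass clustering: each question joins the first
    cluster whose representative is similar enough, else starts a new one."""
    clusters = []  # list of (representative_vector, member_questions)
    for question in questions:
        vec = bag_of_words(question)
        for rep, members in clusters:
            if cosine(vec, rep) >= threshold:
                members.append(question)
                break
        else:
            clusters.append((vec, [question]))
    return [members for _, members in clusters]

questions = [
    "How do I add a profile picture?",
    "Can't upload my profile picture",
    "Where do I see my billing invoice?",
    "How can I add a picture to my profile?",
]
groups = group_questions(questions)
# The profile-picture questions land in one group; billing stays separate.
```

A batch of tickets grouped this way lets a CSR work through all the profile-picture variants in one sitting before switching context to billing.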
The benefit of doing this is that it maintains the same train of thought in resolving issues. In some cases, the solutions may be identical, while in others the CSRs will know what steps to take to resolve an issue while everything is still fresh in memory. By limiting context switching, you can expect to see a reduction in response times. #4 Auto-route Questions Based on Expertise Support questions come in all shapes and forms, and customers may express the very same question quite differently. For example, the questions in Figure 3 are all related to adding a picture to the customer's profile, with slight differences in how they're structured. Figure 3: Similar questions, different expression By classifying each incoming question into a predefined set of categories with text classification methods (e.g. profile, picture, attachment), you can use these categories to route the questions to agents who are best at handling those topics. Some CSRs may be highly qualified at handling certain topics more than others. By intelligently routing questions to relevant expertise, you'll be increasing the productivity of CSRs, as they are not spending time learning how to address support issues that are out of their wheelhouse. #5 Auto-prioritize support threads A few companies that I've worked with have mentioned addressing support threads in First In, First Out (FIFO) order, meaning the oldest support threads get addressed first. Other companies manually assign priority based on the severity level of the issue. Don't forget that not all customers are equal and not all problems deserve the same level of attention. By addressing threads in FIFO order or assigning priority based on severity alone, you are missing out on the opportunity to retain your highest-valued customers. If you are spending your time solving 100 low-priority problems for low-value customers before serving your most valuable ones, it's time to think about making changes. 
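As a toy illustration of classification-based routing, the sketch below scores each incoming question against per-category keyword sets and maps the winning category to an agent queue. The category keywords, agent names, and the `route` helper are all hypothetical assumptions; a production system would replace the keyword matcher with a trained text classifier.

```python
import re

# Hypothetical category keyword sets; a real deployment would learn these.
CATEGORIES = {
    "profile": {"profile", "picture", "avatar", "photo"},
    "billing": {"billing", "invoice", "charge", "refund", "payment"},
}

# Hypothetical mapping from topic to the CSR best equipped to handle it.
ROUTES = {"profile": "agent_a", "billing": "agent_b", "general": "triage_queue"}

def classify(question):
    """Pick the category whose keywords overlap the question the most."""
    tokens = set(re.findall(r"[a-z0-9]+", question.lower()))
    best_label, best_hits = "general", 0
    for label, keywords in CATEGORIES.items():
        hits = len(tokens & keywords)
        if hits > best_hits:
            best_label, best_hits = label, hits
    return best_label

def route(question):
    """Send the question to the queue of the matching specialist."""
    return ROUTES[classify(question)]
```

With this in place, "How do I add a profile picture?" lands with the profile specialist, while anything matching no category falls back to a general triage queue rather than being misrouted.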
While you can start serving your VIP customers first, with A.I.-based automation you can combine various factors into prioritization. For example, you can develop a model that combines the priority of historical support threads with the value that each customer brings (e.g. customer lifetime value) to assign priority to new support questions. This will ensure that your highest-value customers with high-priority issues get served promptly and by your best CSRs. How to Get Started? If you are a leader in customer support or a product manager, you may be wondering where and how to get started. I recommend that you start by listing your most inefficient processes. What's taking you the most time? Is it that the search for answers is slow, or that question routing has gone bad? Once you know what's hurting, the next step is to determine if the inefficiency can benefit from Machine Learning, Text Mining, and NLP automation. Trying to solve all problems at once will set you up for failure, as there'll be too many changes in your workflow. In addition, some of the automation can be at odds with each other. For example, by auto-prioritizing threads, grouping questions may or may not be effective. Optimizing everything at once also limits your ability to measure success. You won't know if the reduction in response times was due to the recommendation of the best answers or due to the auto-prioritization of threads. Despite the hype, not all problems benefit from A.I.; some can be resolved with other standard solutions or simple changes to how your software works. In cases of uncertainty, you can always get in touch with me for a recommendation.
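The combined prioritization model described above can be sketched as a simple weighted score. The weights, the 72-hour ageing cap, and the sample thread data are illustrative assumptions; in practice, the weights would be tuned or fit from historical resolution data.

```python
def priority_score(severity, customer_value, age_hours,
                   w_severity=0.5, w_value=0.4, w_age=0.1):
    """Blend issue severity (0-1), normalized customer lifetime
    value (0-1), and thread age into one priority score."""
    aging = min(age_hours / 72.0, 1.0)  # cap the ageing boost at three days
    return w_severity * severity + w_value * customer_value + w_age * aging

threads = [
    {"id": 1, "severity": 0.9, "clv": 0.2, "age_hours": 48},  # severe, low-value
    {"id": 2, "severity": 0.4, "clv": 0.95, "age_hours": 2},  # mild, high-value
    {"id": 3, "severity": 0.9, "clv": 0.9, "age_hours": 1},   # severe, high-value
]

# Highest combined priority first, instead of plain FIFO.
queue = sorted(
    threads,
    key=lambda t: priority_score(t["severity"], t["clv"], t["age_hours"]),
    reverse=True,
)
```

Unlike FIFO, the severe issue from the high-value customer jumps to the front of the queue even though it arrived last, while the age term keeps old low-value threads from starving forever.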
https://medium.com/swlh/how-to-improve-productivity-in-customer-support-with-a-i-2ce19eaf77aa
['Kavita Ganesan']
2020-04-02 01:26:56.457000+00:00
['Artificial Intelligence', 'Technology', 'UX', 'Productivity', 'Business']
1,012
Announcing dLab’s First Investments
Meet the Four Companies Kicking Off the dLab/EMURGO Accelerator Program Today, we’re excited to announce the first four startups that we will be investing in and working with as part of the dLab/EMURGO program, a 14-week program designed to accelerate the development of these companies. It’s been an extremely competitive selection process, as we’ve received far more applications than expected for an inaugural program, and it’s truly been a pleasure to speak with so many passionate founders who are working day and night to decentralize the world’s data, empower individuals, tokenize financial instruments, and explore new use cases for blockchain technology. After hours of interviews and internal discussions, these four companies proved to us that they have what it takes to become great. Thank you to everyone who has applied! In addition to the four companies which are participating in the first dLab/EMURGO cohort, we’re also funding several Cardano fellowship projects. Fellows will work closely with our staff, selected startups and our partners to develop novel concepts for research, open source, education, and productization. We’ll be releasing details about these projects in the coming weeks on our blog, so keep an eye out! Both SOSV and EMURGO will be spending the next 14 weeks working hand in hand with these founders to accelerate their companies’ technology, business models, and positioning. In addition to investment, we’ll be providing them with staff support, network access, and opportunities to work closely with our large, diverse mentorship pool. Meet the Companies Catallact is a blockchain analytics engine for the finance industry, focusing on market intelligence and regulatory compliance. Built upon extensive academic research by founders that have significant trading experience, Catallact applies machine learning and data science techniques to provide automated and scalable insights into the dynamics of crypto-assets. 
Team: Paul Lewis and Dan McGinn Helixworks’ proprietary MoSS technology integrates DNA-based ID tags into physical goods. When combined with digital ledger technologies, DNA-IDs provide an effective mechanism to track food, medicine and other supply chain goods that is secure, robust (surviving washing, extensive damage, etc.), and extremely difficult to falsify. Team: Nimesh Pinnamaneni and Sachin Chalapati Sempo helps aid organizations distribute relief funds directly to people in crises so that they can immediately buy what they need. They solve beneficiary enrollment, cash disbursement and program monitoring in one seamless platform. Team: Nick Williams and Tristan Cole Tesseract is standardizing and streamlining the way applications interact with blockchains to make it as convenient as interacting with the Internet. Their multi-network OpenWallet protocol, mobile-first wallet reference implementation, and supporting SDKs for mobile application development are their first step towards building the railway for the decentralized internet. Team: Daniel Leping, Gilad Waksman, and Yehor Popovych About dLab dLab is a New York City-based accelerator which focuses on distributed ledger technologies, including DLT protocols, blockchain infrastructure, and decentralized applications. dLab combines EMURGO’s deep expertise in commercial blockchain development and the robust strength of the Cardano ecosystem with SOSV’s best-in-class acceleration processes, investment consortium, and ability to help companies bring innovative technologies to market rapidly. The program is designed to be protocol agnostic and multi-disciplinary, and accelerated startups may be focused on any variant of distributed ledger or blockchain technologies. EMURGO’s partnership means that startups building for the Cardano ecosystem receive several material advantages, including access to Cardano development and policy partners.
About EMURGO EMURGO drives the adoption of Cardano and adds value to ADA holders by building, investing in, and advising projects or organizations that adopt Cardano’s decentralized blockchain ecosystem. EMURGO leverages its expertise in blockchain R&D as well as its global network of related blockchain and industry partners to support ventures globally. EMURGO is the official commercial and venture arm of the Cardano project, registered in Tokyo, Japan since June 2017 and in Singapore since May 2018. EMURGO is uniquely affiliated and works closely with IOHK to grow Cardano’s ecosystem globally and promote the adoption of the Cardano blockchain. About SOSV SOSV, the “Accelerator VC”, deploys more than $50m each year to the 150 startups and alumni that graduate their deep-tech accelerators in hardware (HAX), life sciences (IndieBio/RebelBio), cross-border internet (Chinaccelerator/MOX), and disruptive food (Food-X). Every year an additional $250m in follow-on capital is invested into SOSV startups by a broad investment network of over 200 VCs and corporates globally. SOSV startups have aggregate revenues of more than $1B and a combined market capitalization of over $10B.
https://medium.com/dlabvc/announcing-dlabs-first-investments-dcc59306e13e
['Paul Saint-Donat']
2019-02-14 23:11:19.336000+00:00
['Investing', 'Accelerator', 'Venture Capital', 'Technology', 'Blockchain']
1,013
An Outdated System: Education from the Perspective of a Student
Photo from https://www.dreamstime.com/illustration/education-collage.html We forget 40% of the content we learn in schools. Yet we use only 37% of what we remember in everyday life, according to a study done by H&R Block. There is a simple truth for kids in America: the education system is not working. It was built for kids 100 years ago, in order to breed factory workers during the industrial revolution. Evidently, we no longer live in that time period. Now we live in a modernized world with the most technologically stimulated generation the world has seen. Such a system requires an update, and continuous updating, in order to teach kids in a rapidly changing society. To inform kids about the biggest societal, environmental, and political problems, we need to change our outdated school system and how we teach kids. Since I was in Pre-K, I have loved school. I loved seeing my friends and collaborating with them, but I especially loved learning. I remember the feeling as I memorized my first multiplication table in the first grade. I spent my reading time studying it by myself in order to succeed. I presented it by memory in front of my classmates, facing my fear of public speaking and earning myself a candy bar in the process. That first candy bar fueled my hunger for learning. A hunger that has since died out. Now I’m going through the high school education system, and I can see that there are major foundational problems in the way that my peers and I are taught. Some of these problems arise from technology, but most are from the outdated ways we learn. It’s extremely difficult to maintain students’ attention when so many things are happening around us. Just look at phone usage: checking our phones is the first thing we do when class is over, and most kids sneak them during class too. That’s because to us kids, our phones are more interesting than class, especially with social media. However, the education system was built to teach kids who lived during the industrial revolution.
Kids who had never seen a phone in their life. Therefore, we feel like we can’t relate to the content anymore. We lose interest. Likewise, there is a growing belief amongst my peers and me that most of the content we learn is useless after graduation. Consequently, to my generation, school has become more about passing than learning. This loss of interest, coupled with worldwide connectivity, effectively breeds the largest wave of academic dishonesty our school system has ever faced. It is a problem that, without more resources, we are not ready to solve. Photo from https://www.apa.org/monitor/2020/04/cover-kids-screens Experts like Seth Godin and Ken Robinson agree that our school system was designed during the Industrial Age, with Industrial Age values of mass production and mass control that still run deep in the roots of our schools. Sophia Takrouri shows how the rote learning style, based on repetition and memorization, was made to create prospective factory workers: “They were being taught to follow and remember basic commands, which was vital in a factory setting” (Takrouri). The fact that it is still the primary learning style in American schools shows how outdated our school system really is. More evidence of this is how our entire day is run by bells, we have assigned spots, and we are only rewarded if we behave well. These are Industrial Age values that would dictate the success of workers in a factory, not kids in a technologically advanced society. On average, kids get a phone by age 10. Additionally, 25% of those kids have one by age 6, according to a survey done on 1,000 Americans by Panda Security. This proves to be an enormous problem that our schools simply can’t combat without strict regulations. Educational and developmental psychologist Kelly Allen illustrates that phones can be a major distraction.
“Kuznekoff and Titsworth found that students who did not use smartphones while participating in a lecture wrote 62 per cent more information in their notes and were able to recall more information than their phone-using counterparts” (Allen). Clearly, phone use affects how students pay attention during class. Kelly continues by showing a subsequent study that had similar results: “Students who did not use their mobile phones, or used them for class-related content, earned higher grades and scored higher on information recall than students who used their phone for unrelated purposes” (Allen). Evidently, phone usage directly correlates with success in school. As American children increasingly get phones at a younger age, it distracts them from learning. This early technological stimulation, coupled with the ineffective way of teaching, instills a lack of attention to academics at an early stage in the learning process. Another problem is the persistent feeling among graduates that the content we learn is useless. Schools are intended to prepare kids for the outside world. A world that is changing very rapidly. Journalist Ben Renner highlights the biggest problems with educational content: “Researchers found the average educated American forgets about 40% of what they learned, and uses just 37% of the knowledge and skills in their everyday lives on average” (Renner). This confirms the growing belief that most of the content we learn in school has no value in the outside world and job market. This is very detrimental because students lose faith in the purpose of their education. Unfortunately, I did as well. As a student, it feels illogical that I know the headlines of Pennsylvania from 200 years ago and that chlorophyll makes plants green, but I don’t know how to manage my time and money properly or how to get a mortgage. Critics of educational reform may argue that there are much more pressing problems that require our time and resources.
However, in order to fix those problems we need to educate our kids about them. Our schools also need many more resources in order to see a successful update. I’m not just talking about improving school lunches. Our teachers should be given more updated teaching skills and much higher pay. Roughly half of teachers report feeling under great stress several days a week. Job satisfaction is at a 25-year low. And almost a third of teachers say they are likely to leave the profession within the next five years. Our teachers are underpaid and undervalued. This is a problem because kids thrive in classes where teachers take the initiative to help them and teach in different ways than our system requires them to. Unfortunately, few teachers can do this; when they are struggling to find time to teach their students at all, they feel they need to rely on a system that clearly does not work. Critics could also argue that school budgeting is spread thin. They could say that there simply isn’t enough in the national budget to pay for the millions of kids in America. By contrast, there are major problems with government spending. Schools are given 64 billion dollars per year, while the military is given a whopping 712 billion. Schools’ budget could be doubled without making a dent in its military counterpart. Anti-reformists might also bring up the fact that school funding varies by state. Of course, this means people who have higher property values pay higher taxes; therefore, more money is funneled into their local schools. This creates large discrepancies in the quality of education, and arguing that it could be standardized in any way is sugarcoating reality. Shockingly, this isn’t always the case. For instance, take Texas, where 65% of the education bill is paid by property owners. Jennifer Sapio summarizes an article by the Texas Tribune showing how poorer communities are taxed at a higher rate than their wealthy counterparts.
“Fifty percent of poor districts, according to Swaby, are already capped at the maximum tax rate of $1.17 per $100 worth of property value. Meanwhile, sixty percent of wealthy districts only pay $1.04 per $100, a mere 4 cents above the legal minimum contribution to the education budget” (Sapio). Clearly, you can’t simply change the economic policies that fund local schools; yet there is obviously a major problem in the way these policies are executed. The example of Texas is a mere leaf in the trunk of systemic discrimination against poorer communities ingrained into the education system, masquerading behind the means of capitalism. SOURCE: U.S. Census Bureau 2016 Annual Survey of School System Finances It’s clear that the American education system is outdated and doesn’t work for our students or our teachers. It needs to be changed from its Industrial Age values, and money from the national budget must be diverted to aid this change. Otherwise, we won’t be able to tackle large-scale societal problems like climate change or the systemic racism that pollutes the environmental, justice, and education systems. So, my fellow students, now is the time to change it. Instead of giving up on schools and failing your classes, I challenge you to tell your parents, teachers, and politicians that this isn’t working. 2020 has been a year of much change. Now there needs to be even more.
https://medium.com/@dbarrett003/an-outdated-system-education-from-the-perspective-of-a-student-c3de564a020f
['Dylan Barrett']
2020-12-07 20:23:51.656000+00:00
['Kids', 'Funding', 'Educa', 'Technology']
1,014
Cyber defence — going beyond traditional frontiers
Written by Russell Haworth, CEO, Nominet A week ago we were made aware of the FireEye cyber attack. Days later we were witnessing an impressive global incident response. FireEye had suffered an attack by a highly sophisticated, suspected state-sponsored, threat actor. Hackers targeted and accessed its Red Team assessment tools, used to test customer security and replicate the behaviour of attacks. The team at FireEye were swift to begin issuing a number of countermeasures and communicating the breach. Just five days later FireEye released further information on a global campaign that introduces compromise into the networks of public and private organisations through the software supply chain. It was delivered through updates to the Orion network monitoring product from SolarWinds. The response that followed included not only the impacted vendors and their clients, but governments around the world. Both the NCSC and CISA offered advice and began action to mitigate the impact. The battle lines of cyber warfare have never been clearer than they have been in this last week: a suspected state-sponsored actor coming up against the collaborative intelligence and incident response of a cross-country defence force. That is a defence force which the UK is actively investing in, following the increased military investment announced in November and the creation of a new UK national cyber defence force. It would be an understatement to say that there has been a growing undercurrent of geopolitical cyber tensions, from Russia’s disinformation campaign in the 2016 elections to the recent Russia Report issued by the UK Government and other documented attacks, linked to China, against other governments. But what is especially interesting is how governments, vendors and the wider industry pulled together. By truly working in concert, the swiftness of the response matched the audacity of the attack.
From a Nominet perspective, it goes to show the critical part we all play in defending our nation and the critical national infrastructure it relies upon: for our part, protecting .UK and delivering the Protective Domain Name Service (PDNS) on behalf of the National Cyber Security Centre (NCSC). Also, within critical national infrastructure, ever more IoT technology is being integrated, and consequently the number of vulnerabilities and the scale of potential attacks are widening. This too must be incorporated into our protective landscape. Encouragingly, these latest incidents show that while the attacks themselves are multi-layered, so is our response. That’s the future of resilience. Above all, cyber defence is no longer in the technical ‘weeds’. It is part of our everyday lives and can have a huge impact. When we’re defending public services, it’s not only the government that’s being protected, or individual hospitals, but also individuals: the doctors, the nurses and the patients. It is me and you. And that’s why collaboration is paramount. Originally posted here
https://medium.com/digital-leaders-uk/cyber-defence-going-beyond-traditional-frontiers-9db01deaefd0
['Digital Leaders']
2021-01-07 15:41:55.003000+00:00
['Cyber Defense', 'Cybersecurity', 'Digital Transformation', 'Technology', 'Hacking']
1,015
From the FAQ: What Is a Botnet?
When I joined Salad, I had no clue what cryptojackers, botnets, or black hat hacking were (outside of Deus Ex, that is). There be hijinx in this digital Wild West of ours, and it’s not all in good fun. Every day, internet users face myriad threats to their privacy, hardware, and even agency over their computers. Today’s villain is the much reviled botnet — a sinister practice that has snuck into the Blockchain world but whose roots go back to the dawn of the internet. Let’s get into it. How Do Botnets Happen? A botnet is a network of infected computers used to perform some malicious task. By building a critical mass of computing power, the organizations and people behind them can ply the captured hardware to their nefarious ends. To make one, baddies distribute malware that gives them access to your PC or Internet of Things devices. Botnets aren’t choosy; they’ll take over smart TVs, home security systems, or that Amazon Alexa you gifted to grandma. What Are They Used For? DDoS attacks incorporate many elements into a holistic attack on a network. (Credit: @EpicTop10) Of the many illicit uses for a botnet, the most common application is the dreaded distributed denial of service (DDoS) attack. This tactic involves pummeling a target website with a dizzying volume of requests sent by machines in the thrall of a botnet. DDoS attacks can overwhelm a site to the point of shutdown, or at least tie up its security resources. While their hopeless target fends off the torrent of messages from the attacking botnet, hackers can mount concurrent attacks undetected in the ensuing confusion. Such distributed assaults are difficult to counter. Because botnets mask incoming requests as organic traffic, it’s nigh impossible to trace the attack to a single location. Any Connection to Crypto? Botnets predate blockchain technology and today’s cryptocurrencies by a few decades.
Yet many people erroneously assume the two are related due to the rise of cryptojacking, a tactic where hackers draft your PC into a botnet to use its power to mine cryptocurrency. The History of Botnets The history of prominent botnets is a sordid list of malicious hacking. Some of the biggest botnet attacks of the past twenty years, and consequently some of the most well known, each affected millions of users. Hackers stole personal information, launched DDoS attacks, and faked advertising traffic. If you want to read up, check out EC-Council’s breakdown of the biggest botnets since 2000. How Do You Stop a Botnet? In the middle of an attack, enterprise targets can only hope to mitigate the damage by recognizing botnet activity as fraudulent, seeking help from their ISP, or taking proactive measures at the server level. When the aforementioned 3ve attack nearly toppled the digital advertising industry, it took the combined efforts of WhiteOps (a white hat hacking organization), Google, and a bevy of other tech companies to curtail the bot. How Would I Know if I Was on a Botnet? Understanding the steps in a botnet attack is critical to preventing them. (Credit: @Paula Piccard) The best defense is safeguarding your PC from joining a botnet in the first place. Once a botnet launches an attack, there’s little the average user can do to stop it. The good news: people are smart! If you avoid risky browser ad clicks and stay away from downloads in your email spam folder, you’re well on your way to safety. The bad news: botnet creators aren’t dumb either. Most sophisticated botnets try to conceal themselves, and the vast majority of users whose machines are infected will never know it. Unlike distributed computing networks like Salad, botnets use backdoors to commandeer your PC without consent. Botnet creators rely on malware to steal computing power from unwilling targets, taking extra pains to go undetected for as long as possible.
Everyday Vigilance Always do your homework on the digital entities you encounter! If they seem vague about things like their location, or you find out they’re incorporated in the lost empire of Atlantis, maybe you ought to reconsider downloading their software. Fortunately, if you’re vigilant enough, there are measures you can take to minimize your chances of infection. A lot of this is just good digital hygiene: don’t download software or files from browser ads; never download from emails without verifying the sender; avoid sites with sketchy ad providers; read verified user reviews before downloading software. For more info on the warning signs and mitigation methods for us everyday internet denizens, Jack Busch has compiled an excellent breakdown on botnet red flags and ways to stay safe. Is Salad a Botnet? Running a botnet is super-duper illegal under U.S. law. If Salad was a botnet, my cohorts would all be sitting in federal prison (instead of getting crushed by yours truly in Age of Empires II). In fact, we tick quite a few boxes on the “definitely not a botnet” checklist. Most botnets aren’t incorporated in the USA, nor do they use their real identities. They rarely have well-heeled investors or warm and fuzzy reviews, and we’ll take the odds on whether they operate from sweet battle stations in Utah. For the discerning user, the best proof would be to root around our open-source code on Github. There you’ll see directly how the magic happens — with nary a line of arbitrary code. Salad simply manages the relationship between your PC and the mining pools we use to make you moolah.
https://medium.com/salad-technologies/what-is-a-botnet-and-is-salad-one-of-them-629e3a3aed1e
[]
2021-04-15 20:44:07.610000+00:00
['Technology', 'PC', 'Tech', 'Computers', 'Hacking']
1,016
🌎 Digital Diplomacy partners with The UN Brief
🌎 Digital Diplomacy partners with The UN Brief Focus on tech for good, the United Nations, and the digital age in multilateral fora. We’re excited to announce the launch of “The UN Brief” column here on Digital Diplomacy! Starting today and every Monday, Digital Diplomacy will publish The UN Brief, a weekly column by The UN Brief focused on technology for good, the tech debate at the United Nations and its agencies, as well as what the digital age means for government and citizens around the world. “At The UN Brief we believe in a world where international cooperation matters,” said Maya Plentz, founder and editor-in-chief of The UN Brief. “This partnership with Digital Diplomacy furthers our mission of shedding light in the digital transformation of multilateral organizations, the pervasiveness of new technologies that impact the negotiations of normative frameworks, peace and security, and the work of diplomats around the globe.” The first weekly column is about the World Health Organization’s newly-launched COVID-19 platform:
https://medium.com/digital-diplomacy/digital-diplomacy-partners-with-the-un-brief-923625ae1574
['Andreas Sandre']
2020-06-01 17:53:03.022000+00:00
['Social Media', 'Technology', 'Tech', 'Government', 'UN']
1,017
Will Quantum Computer Keep Up With Moore’s Law
Although transistors can’t get smaller, computers can get larger. Although not practical for everyday use, large powerful computers can be useful for huge projects, as shown by NASA’s supercomputers. On the other hand, computers must still progress for everyday use. To do this, innovative ideas must be conjured and created. This could allow people across the world to have a better quality of life. In 1998 an innovative quantum computer was made for the first time. Quantum computers have been enhanced since then, proving to be much more powerful. The key difference with a quantum computer is how the bits work. As said, transistors either allow electrons to pass or block them. Depending on the function performed, the information is given as a 1 or a 0. This is called a bit. Combined, this information from transistors can form logic gates, e.g. AND, OR, and these logic gates can in turn perform basic functions. In normal computers, bits can be set to one of two values. However, in quantum computers, qubits take the place of bits. These can be in any proportion of both states (0 and 1) at once. This is called superposition. The catch is that once observed, the qubit must settle on one of the two values. 4 classical bits can be in 1 of 2⁴ = 16 combinations at a time. Qubits, however, can exist in all of these values at once, because each qubit can be both 1 and 0. This grows exponentially as the number of qubits grows. At 20 of them, they can already store over 1 million values in parallel. Google’s quantum computer has 53 qubits. This means it can work with 2⁵³ combinations at a time. For the world’s best supercomputer, a particularly difficult calculation could take 10,000 years, whilst this quantum computer can do it in 200 seconds. Quantum Computer The only setback of these computers is their price and size. These computers take billions to make and are very large. This means we still have a long way to go for quantum computers to progress. They could remain available only to large companies.
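The exponential growth described above is easy to check: n classical bits can occupy one of 2ⁿ states but hold only one at a time, while an n-qubit register carries amplitudes for all 2ⁿ basis states simultaneously. A quick sketch of the counting:

```python
# Sketch of the exponential state-space growth described above:
# n classical bits occupy one of 2**n states at a time, while n qubits
# hold a superposition over all 2**n basis states simultaneously.

def classical_states(n_bits: int) -> int:
    """Number of distinct states; only one is held at any instant."""
    return 2 ** n_bits

def qubit_amplitudes(n_qubits: int) -> int:
    """Number of basis-state amplitudes a quantum register tracks at once."""
    return 2 ** n_qubits

print(classical_states(4))    # 16 combinations, one at a time
print(qubit_amplitudes(20))   # 1048576 values in parallel
print(qubit_amplitudes(53))   # the 53-qubit state space
```

The two functions are numerically identical on purpose: the difference is not the count but that a quantum register holds all of those values in superposition at once, which is why the advantage compounds with every added qubit.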
Or, if they progress enough, they could become commercial. This could make seemingly impossible everyday things possible. In conclusion, Moore’s Law may temporarily stunt computing progress. This would mean our steep improvement in the last few years may plateau. However, this does not mean the end of our progress. Innovative computers, e.g. quantum computers, could break through this plateau. These can allow for unprecedented levels of power, being able to perform huge numbers of logical decisions and arithmetic calculations. These could create solutions to huge modern problems, whether medical, economic or communication-related. If the number of qubits in quantum computers begins to increase more rapidly, this could defeat Moore’s Law’s plateau entirely. To see how useful innovation can be, go to buysmartbuycheap.com
https://medium.com/@t.o.mhealey1092/will-quantum-computer-keep-up-with-moores-law-6def544a5515
['Tom Healey']
2020-09-22 17:56:05.177000+00:00
['Future Technology', 'Computer Science', 'Progress', 'Quantum Computing', 'Computers']
1,018
Essential Roles and Responsibilities of Successful Business Leaders
Essential Roles and Responsibilities of Successful Business Leaders Innovative leadership is my research interest. I have studied the prominent traits that attract us to exceptional leaders, and I have been researching leadership in various settings for several decades. In one of my leadership and business articles, I introduced “Remarkable Leadership Traits for Technology Executives”. My research revealed the difference between ordinary and distinguished leaders. In this post, I want to share 11 essential roles and responsibilities of exceptional business leaders. Learning these roles can provide useful insights to aspiring business leaders and entrepreneurs. You can find more of these stories on my News Break profile.
https://medium.com/illumination-curated/essential-roles-and-responsibilities-of-successful-business-leaders-c18640297b47
['Dr Mehmet Yildiz']
2020-12-28 15:54:04.796000+00:00
['Business', 'Entrepreneurship', 'Startup', 'Technology', 'Writing']
1,019
Technology is the new luxury
Technology is not how we understood it to be. Luxury is not how we defined it to be. Our definitions of technology and luxury have changed, and they are rapidly converging. This is not about the increasing role of technology in the luxury industry, but about how technology in itself has increasingly started wearing the luxury cloak. In Q4 2017, HSBC analysts strengthened their ‘Buy’ ratings for Apple shares. They had an interesting reason behind that Buy rating: “Recently, with an offensive retail strategy and in some cases comparable price points, Apple has competed with the likes of Louis Vuitton, Cartier or Prada which made us raise the question: is Apple actually a luxury stock? Yes.” One of the surprising (but not unexpected) findings of the 2018 Hurun Report (which surveys HNWIs in China) is Apple topping the ranks of brands given as gifts by HNWIs. Who did it beat to get up there? The likes of Louis Vuitton, Bvlgari and Chanel. In 2017, for the first time, Tesla sold more units of its premium Model S than the Mercedes-Benz S-Class and the BMW 7-series. They had already achieved this feat long before in the US, where the Model S has been the leading car model in the luxury segment for quite a few years now. This is not even a technology brand entering a previously non-technology domain. This is about a new car brand upstaging the leaders in the premium segment. You could ask what role technology really plays in cars, which are by nature among the greatest technological inventions. The answer is that Tesla redefined and repositioned luxury car ownership through environmentally friendly, sustainability-focused and efficiency-driven products. There was no heritage or legacy in automobile manufacturing, but a strong intent to redefine the codes of luxury in car ownership. These are just the bigger and more talked-about examples (Apple, Samsung, Tesla etc.).
Our definitions of luxury are being changed at the core, and most of the time through small, deliberate and impactful changes. Another interesting aspect is that the evolution of technology into luxury is a gradual shift that moves across the world. A change that has already happened in Europe may take a while before it gains acceptance in Asia. Take the example of Starbucks: in the UK we don’t skip a beat when we walk into a Starbucks store, while it is still a high form of luxury in Asia. The world’s largest Starbucks store was opened in Shanghai towards the end of 2017. It is the epitome of luxury shopping (forget coffee drinking): it can cater to around 7,000 customers a day, and its coffee bar, the longest in the world, also has a ‘Pairing Bar’ (spewing out advice on pairing coffee with food), symphony pipes, more than 1,000 traditional Chinese seals (chops) that showcase the Starbucks story, etc. At almost the same time, Starbucks opened its biggest store in Bangkok (Thailand) with the usual luxury connotations (two floors, hybrid espresso machines, a Starbucks Reserve Experience Bar etc.). I am fairly sure, although only qualitatively, that Starbucks hasn’t opened any new stores in London lately, and even if it has, its stores are getting smaller and dirtier. The fact is simple: Londoners do not equate Starbucks with luxury. Will Starbucks ever open a takeaway store in Shanghai? Probably not. These winds of change are not unbeknownst to the traditional luxury manufacturer. Many of them are willing to risk their equity and hundreds of years of legacy to launch technology products. The Apple Watch didn’t take off but it did rattle cages. In Q1 2017, the venerable Montblanc entered the smartwatch segment with the launch of the Montblanc Summit. The summary of the reactions and analysis was “nothing in the product to justify the Montblanc name and price tag”. Traditional luxury depended on exclusivity as the key factor for appeal and for justifying high prices.
Coupled with it were craftsmanship, legacy, history and deep tradition. The new definition of ‘luxury’ is slowly shifting away from these codes, driven by the need for a high-quality life, a greater obsession with the self, a new set of hygiene needs, wide and continuous access to knowledge, the shrinking of the world in terms of distance and higher levels of cultural intermingling. The word ‘innovative’ has pushed its way into the luxury lexicon rapidly in the last few years. This single word, which defines the evolution of new luxury, has led luxury houses to start engaging, investing in, acquiring and rewarding startups who are building technology-driven products and services for their industries. Although it may sound overly simplistic to credit a single word, it is this very need for innovation that technology satisfies more than anything else. The list of the 30 startups selected for the 2018 LVMH Innovation Awards makes for interesting reading. The key entries highlight why technology is going to keep redefining luxury: a mobile and universally connected olfactory sensor that can identify and classify odours; a new category of materials that combine copper and glass, or gold and glass, into a single material; and a solution that combines fashion and technology by designing accessories and clothing that can be customised immediately. In sum, we have technology that has the potential to create new fragrances, new materials for luxury manufacturing and new lines of customisable clothing and accessories. Five years ago this would have been unthinkable. The new definition of luxury has also moved into subscription and on-demand business models. In the past, sampling luxury was akin to sniffing perfume strips, getting hold of a sample while attending a launch or simply getting access to limited edition products before they hit stores. But now you can get them via monthly subscription boxes delivered to your home. 
This again exemplifies the trend of making luxury more convenient and personalised. How has technology become so synonymous and harmonised with luxury? Because of quite a few influential factors, all of them related to our altered definitions of luxury: We don’t view luxury as a mode of celebrating disparate, fragmented, once-in-a-while occasions in our lives (e.g. birthdays, anniversaries, promotions, weddings, the arrival of a child) anymore; luxury is now increasingly a state of being (in our own unique ways). Lack of money does not inhibit us from experiencing the luxurious and the expensive: the availability of cheap credit (aka debt) has made access to luxury much easier. Exclusivity is a very poor differentiator of luxury now; in the true sense of the word, exclusivity does not matter anymore. Yes, we do still queue up overnight to buy the next-generation iPhone and add our names to the never-moving Birkin bag waitlist, but it is not the end of the world. Luxury now comes in smaller, ‘mini’ versions for us: in Shanghai, an overpriced Starbucks coffee is a mini luxury; in London it would be ordering an Uber Exec to arrive in style at a luxury club or house party; in Delhi it would be hanging out in the latest Mexican restaurant that has opened in the luxury mall next door. Luxury is increasingly not seen only as a gift anymore; it is for our self-consumption. Luxury is increasingly not for collectors or connoisseurs; it can be for anyone who has the money, interest or curiosity (or has the connections to break into super-exclusive clubs). Luxury is not about owning inanimate objects anymore; it is about buying something that you can continuously use, which in turn enhances the quality of one dimension of your life (for example the iPhone, the AirPods, underfloor heating in your bathrooms, Bose or Blaupunkt speaker systems, an annual multi-brand airline lounge pass etc.). Technology enables this transformation of the definition of luxury. 
It has made luxury ubiquitous, shareable, able to be experienced without ownership, bite-sized, flexible, customised, convenient and more accessible. Traditional luxury was none of these things; new luxury is all of them.
https://sandeepdas9179.medium.com/technology-is-the-new-luxury-f4ca2359f6fa
['Sandeep Das']
2018-06-30 17:09:57.945000+00:00
['Brands', 'Branding', 'Technology', 'Luxury', 'Strategy']
1,020
Membuat Optical Character Recognition (OCR) Sederhana menggunakan Python
https://medium.com/milooproject/membuat-optical-character-recognition-ocr-sederhana-menggunakan-python-1fad7d427447
['Fahmi Salman']
2020-10-29 06:33:17.350000+00:00
['Image Processing', 'Python', 'Ocr', 'Technology', 'Tesseract']
1,021
My 2018 In Summary
Just as 2019 is starting, I summarize my past year. Last year I wrote my first “Year in Retrospective” summary post, and it made me realize how diverse and interesting my year had been. So I decided to repeat the experience and share my summary of 2018. Here we go — Community Activities Following my passion for Software, Electronics, Robotics and Making, I recently started a new Meetup Group called IoT Makers Israel. We had the first event in a new maker space that had just launched, and it was a blast! Another remarkable project for me is the Community Hours. A few months ago, I opened a weekly spot on my calendar for people who want to connect with me. I have between 4 and 6 calls every week, helping people with career decisions, getting started with blogging, public speaking and many other interesting topics. I had the opportunity to interact with many talented individuals and collaborate with them. For instance, a week ago I had a call with Charlie Gerard, whose work I have been following, and finally got to meet her. I also met Omer Raviv, who told me about his ambitious plans to bring a new debugging experience to the JavaScript world. Michael Hladky shared with me his work creating an RxJS Marble Diagram design language. Adi Polak and I started collaborating on Mixed Reality. And I could go on with more examples for hours… Blogging Challenge Inspired by a blog post from Sara Soueidan, I decided one evening to challenge myself and write a new blog post every single day for a month. This was during October, when I also got married, which made it even more challenging. For the most part, I managed to keep up with the challenge, and learned to write shorter and more focused articles, as well as to start and finish a blog post in the same day. I did lose much sleep over this challenge, but I am very happy with the results. 
Not only that, telling the story of my challenge actually inspired two more friends, Ire Aderinokun and Abraham Williams, to take on a similar challenge: Public Speaking For the past few years I have been doing a lot of public speaking, but most of my talks were solo talks. This year, I set myself a goal to start doing talks together with other speakers. I started by collaborating with Alex Castillo on a Virtual Reality session for ng-conf, and I think the result speaks for itself: Next, Kapunahele Wong and I worked on a talk together for AngularConnect, explaining Injector Trees in Angular: We shared the “behind the scenes” of the talk in a blog post, which explains what it takes to collaborate and work together on a “long-distance” tech talk — as Kapunahele lives in the States, and I live in Israel. I also had the opportunity to collaborate on a “React Fiber and Angular Ivy” talk with Netta Bondy, which was a lot of fun and also taught me a lot about React. Netta is one of those rare people who actually read the source code of the framework they use, just out of curiosity — This year I also submitted a talk to BSidesTLV, an information security conference that was held in Tel Aviv. This was the first time I gave a talk at a security conference, and the talk was about breaking a cipher using Python. Most of my talks are related to either Front-End, Electronics or IoT, so this was totally stepping out of my comfort zone: Finally, I had my first experience where my live demo totally failed on stage. This was during ngAtlanta, an awesome conference that encourages diversity among its speakers. Despite my demo failing, I got a lot of empathy and positive feedback from the attendees, and remember this as a positive experience: That time when my brain waves decided to be shy This experience is what eventually led me to blog about Live Coding and encourage others to give it a chance and not be afraid of it. 
New Engineering Skills I started the year experimenting with Augmented Reality on the Web: This technology was really fun to play with, and I even took it with me to the Finnish snow: Taking this photo at -15°C was quite a challenge AST, TypeWiz and Angular I also started experimenting with Abstract Syntax Trees (ASTs), and how they can be used to create innovative tools for developers. This experiment eventually turned into TypeWiz, a tool that automatically adds missing types to your TypeScript code. I also gave a bunch of talks about ASTs and how they can be used, including my AngularUP talk this year, Let Your Angular Code Write Itself. I also spent some time exploring Ivy, the new rendering engine of Angular, and even managed to run it inside a minimal StackBlitz app. I’m pretty amazed by how the Angular team managed to simplify things. Usually, frameworks get more complex over time, but in this case, it seems like the opposite is true, and thus, I really enjoyed digging into it. BigQuery and Stretching the Limits Another technology that soaked up my free time was Google’s BigQuery. It is a highly scalable SQL database, which can run queries over vast amounts of data in a matter of seconds. After using it in the “traditional” way for some projects (e.g. my Spanish Lesson Action), and after organizing a meetup event about the theory behind Bitcoin, I had this crazy idea: harnessing BigQuery’s power to mine cryptocurrency. I was quite amazed when I managed to prove that this was actually feasible (though not very profitable), and moved on to another adventure — running complex AST queries on all the TypeScript code in GitHub using BigQuery. All in all, this was a fascinating journey for me, and the first time I worked with data sets of several terabytes, or as some call it — Big Data! Making, Electronics and 3D-Printing Last year I presented my IRL No-Internet T-Rex Game at Chrome’s annual Developer Summit. 
This year, I was invited to present again, so together with Ariella and Avi Aminov we tried to build a robot that plays the trumpet, and failed. However, we had a backup plan and eventually managed to build a Web-Controlled Trumpet Playing Robot: Our backup plan was to use a speaker, together with some Web Audio love I spent several days prototyping and designing several mechanisms for the robot, such as a syringe-based air pump and servo-controlled fingers, which taught me a lot about modeling complex, 3D-printable mechanisms. Let there be fingers 🖐🤖 Presenting the robot at the Chrome Dev Summit was quite an experience, as I created a small code editor and let the attendees hack the code that controls the trumpet, and the results were quite surprising: Some More Projects I also created a JavaScript-controlled Rock Tumbler, which I improvised in just a couple of hours from stuff I had lying around: Another fun project was turning my 3D printer into a plotter: Work Projects I work with Pavel on several side projects that help us make a living. This year, we tried to create a live video chat application for the Wix app market, but after spending some months on it without making any significant progress with the product, we decided to abandon it. I did learn a lot about WebRTC from this project. We also tried to scale up our Social Media Stream app and start selling it as a standalone product, but were unsuccessful at building the sales funnel. We are now starting a new project called VoiceOn, and hoping to get a working prototype deployed for our first customer in a couple of weeks. Oh, and by the way, I no longer work for BlackBerry ;-) A special guest on our wedding day Looking forward to 2019! This year was really interesting, and as 2019 is starting, there are several things I am already looking forward to: Ariella and I are going on our honeymoon to Japan. 
We will travel across the country for almost two months, and I just can’t wait to get this journey started. On the community front, I’d love to see the new IoT Makers meetup group growing. I have so many ideas for events and workshops, and have gotten very positive feedback on the meetings we have had so far. I also want to keep blogging at least once a month, and to keep meeting interesting people through the Community Hours. Finally, I hope to see VoiceOn growing this year and becoming a successful business. Well, that’s all for now. Wish me happy times in Japan!
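The AST experiments described above revolve around one idea: parse source code into a syntax tree, then walk that tree to find facts a tool can act on (TypeWiz, for instance, looks for missing TypeScript types). A minimal sketch of the same pattern, using Python's built-in `ast` module as a stand-in for the TypeScript compiler API (the `source` snippet and `untyped_params` helper are illustrative, not from the article):

```python
import ast

source = """
def add(a, b):
    return a + b

def greet(name: str) -> str:
    return "Hello " + name
"""

def untyped_params(code: str) -> list[str]:
    """Walk the syntax tree and report parameters without type annotations,
    the same kind of gap a tool like TypeWiz fills in automatically."""
    tree = ast.parse(code)
    missing = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for arg in node.args.args:
                if arg.annotation is None:
                    missing.append(f"{node.name}.{arg.arg}")
    return missing

print(untyped_params(source))  # -> ['add.a', 'add.b']
```

A real type-adding tool would go one step further and rewrite the tree, but the walk-and-inspect step shown here is the core of most AST-based developer tooling.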
https://medium.com/@urish/my-2018-in-summary-3f6f6b469db8
['Uri Shaked']
2019-02-10 11:29:19.382000+00:00
['JavaScript', 'Makers', 'Blogging', 'Public Speaking', 'Technology']
1,022
Technology • Innovation • Publishing — Issue #125
Innovation Twitter’s Audio Spaces test includes transcriptions, speaker controls and reporting features — techcrunch.com ICYMI Earlier this month, Twitter announced it would soon begin testing its own Clubhouse rival, called Audio Spaces. The new product will allow Twitter users to gather in dedicated spaces for live conversations with another person or with groups of people. Walmart will do its first ‘shoppable’ livestream on TikTok, a holiday variety show to pitch apparel — adage.com Walmart’s one-hour show will include peeks inside influencers’ closets, a living room runway show and fashion-themed dance-offs, during which people can buy featured items without leaving TikTok. Amazon wants to scan your body to make perfectly fitting shirts — www.fastcompany.com For $25, Amazon will make you a custom T-shirt. Mojo Vision teams up with optics leader Menicon to develop AR contact lenses — venturebeat.com HT @nycmedialab Mojo Vision has teamed up to develop AR contact lenses with Menicon, Japan’s largest and oldest maker of contact lenses. #AR Forrester Predictions 2021 — go.forrester.com Explore Predictions 2021 to understand the trends that will shape technology, CX, marketing, sales, and other sectors in the year ahead. Read now. Gartner Top 10 Strategic Predictions for 2021 and Beyond — www.gartner.com This year’s Gartner Top Strategic Predictions highlights anticipated non-traditional business approaches like dna storage, factory and farm automation and freelance customer service. Technology FTC launches sweeping privacy study of top tech platforms — www.axios.com FTC set to announce wide-ranging study into privacy/data collection practices of ByteDance, Amazon, Discord, Facebook, Reddit, Snap, Twitter, WhatsApp and YouTube. 
The move appears to be a wide-reaching inquiry into everything major tech companies know about their users and what they do with that data The big Google DOJ antitrust case probably won’t go to trial until 2023 — techcrunch.com The Justice Department’s historic lawsuit against Google is moving along — albeit very, very slowly. In a status hearing Friday, U.S. District Judge Amit Mehta set a tentative date for the case. The good news and the bad news for both parties involved is that it’s more than two years away. www.bloomberg.com ICYMI #Facebook #Apple techcrunch.com I am looking forward to bots being identified. (Hope they can do it properly!) TikTok app launches on Samsung smart TVs — www.businessinsider.com TikTok is going up against YouTube, whose viewership on TVs is growing. Publishing & Media How The NYC Subway Was Saved By A Typeface — www.youtube.com For much of its existence the New York City subway system was a mess of competing signage leaving unexperienced riders understandably confused. qz.com HT @gretchenrubin Great newsletter issue from Quartz on Book Covers w Best Cover Archive fascinating history and analysis I’m a Romance Novelist Who Writes About Politics — And I Won’t “Stay In My Lane” — www.oprahmag.com HT @publishingtrend Is there such a thing as an apolitical romance novel? (Or any book?) PW’s 2020 Person of the Year: The Book Business Worker — www.publishersweekly.com As it should be. Burger King France Is Donating Its Instagram Posts to Independent Restaurants in Lockdown — www.adweek.com Burger King in France has given over its Instagram channel to independent restaurants — giving them the equivalent of free advertising until Jan. 21, when they will be allowed to open once more. #WhopperAndFriends How Vogue’s international approach to audience data helped it reach record readers — digiday.com #magazines BTS really can do anything. 
Now it’s got print magazines flying off the stands — www.cnn.com With their recent BTS cover stories, Variety, WSJ. Magazine and Esquire each ended up going back to the presses to print more. Variety printed 30% more copies than usual of its Grammy issue, which featured BTS on the cover, and created a digital version for sale. www.pressgazette.co.uk #newsbusiness mondaynote.com Google and the French media made a deal. Per media, without them, Facebook would be MySpace and Google the Yellow Pages. To put things into perspective, €150m/year is about 12% of the revenue of the entire legacy French press. #Google #media Collaborative between journalism groups seeks to start 500 local newsrooms in three years — www.poynter.org The Tiny News Collective says it will provide participants the tools and resources they need to start their own local newsroom New Ad Fraud Scheme Highlights a Growing Problem for Streaming TV — www.wsj.com RT @michellemanafy A new Ad Fraud Scam, which @OracleDataCloud has dubbed “StreamScam,” took advantage of flaws in streaming-TV ad-serving technology & the supply chain to fool marketers into paying for ads that were never actually seen by viewers on real devices and apps. Average U.S. Streaming Consumer Uses Seven Content Services, up from five in April 2020 — worldscreen.com #streaming Warner Bros.’s HBO Max News’ Impact on Advanced TV — www.groupm.com HBO and HBO Max’s total content budget might amount to $6–7B in 2021. For context: the value of a yr’s output from Warner Bros.’ theatrical biz before AT&T acquisition in 2018 typically amounted to $3–4B of cash costs w about half attributable to the U.S. BBC Studios to Launch Ad-Free Subscription Streaming Channel — worldscreen.com BBC Studios is set to launch a new, ad-free subscription streaming channel, BBC Select, in early 2021 on Amazon Prime Video Channels and the Apple TV app. 
#TV www.axios.com Spotify signs exclusive podcast deal with Prince Harry and Meghan Markle — finance.yahoo.com ICYMI The Duke and Duchess of Sussex are the latest to sign an exclusive podcast pact with Spotify. #podcasting High-Energy X-Rays Reveal the Secrets of Ancient Egyptian Ink — www.wired.com Scientists used high-energy x-rays to analyze 12 fragments from ancient Egyptian papyri revealing secrets of ink & paint techniques developed & used well before the 15th century. Resources & Opportunities twitter.com RT @sethasfishman It’s that time again: I’m looking to rep more amazing webcomics. I’m looking to rep BIPOC artists. I love humor and awe and emotion and aww. I am up for adult, ya, mg, picture books. Nonfiction or fiction. I want a POV. I want a story. I am searching. I AM SEARCHING FOR YOU. www.usajobs.gov RT @suzelibrarian Create a wacky Christmas opera with Google’s latest experiment — www.fastcompany.com Google has learned how to sing like famous opera singers. And it lets you share the joy.
https://medium.com/@ksandler1/technology-innovation-publishing-issue-125-8d12499e5e71
['Kathy Sandler']
2020-12-21 15:31:21.164000+00:00
['Streaming', 'AR', 'Innovation', 'Publishing', 'Technology']
1,023
Droneland
Droneland A drone for every purpose is on the way, and Stanford is feeling the buzz. BY MIKE ANTONUCCI Your life is controlled by mortgage payments because you’ve gone all in on the American dream: a cozy home with a fastidiously gardened yard in a gloriously hospitable neighborhood. Unfortunately, your place turns out to be directly under an off-ramp of the drone superhighway. You can’t tend your roses without smelling the descending pizzas; monogrammed underwear for the guy down the block gets airdropped on your front lawn. But then you hear the news of the day. That father and daughter who went missing on their backcountry hiking trip were found by a search-and-rescue drone operating with infrared technology in the dead of night. Successfully engineered for lengthy flight times, the robo-hero made a second trip to lower supplies before a wilderness assistance team even reached the scene. Welcome to the prognostications of the drone revolution, a double-edged sword of progress and turmoil. Stanford professors with expertise in unmanned aerial systems envision a drone-powered future that is overwhelmingly positive. The upside, they believe, will be a cascade of innovation — scientific, commercial and cultural — that rivals some of the biggest shifts in technological history. But along the way there are likely to be so many safety, privacy and nuisance concerns that it may feel like Armageddon for social tranquility. We’ve been told to expect flocks of automated parcel storks, with baskets landing on every doorstep. Amazon does not yet have regulatory approval to blot out the sun, but citizens have picked up the slack by making drones the latest gizmo craze, with hundreds of thousands bought during the 2015 holiday season alone. Or haven’t you yet seen one of those rotor-propelled “toys” at your local park or even hovering outside your bedroom window? There’s widespread anxiety about rogue drones imperiling passenger air traffic. 
At Stanford, precise rules have had to be fashioned to protect university grounds from becoming a flyover mecca. It can be difficult to sort through the hubbub and speculation that obscure a long-term perspective. But Mykel Kochenderfer, an aeronautics and astronautics professor at the national forefront of anticollision engineering, senses what’s coming. “These small consumer drones,” he says, “remind me of the homebrew computer era of the 1970s that led to the personal computer revolution. To a large extent, the technology has been driven by researchers and hobbyists. Start-ups are popping up all around Silicon Valley, and larger corporations are taking notice.” No one is being cute when they say Stanford is abuzz over drones; there’s a surge of interest among students of almost every discipline. The official student club has become yet another nexus for the university and Silicon Valley: It sometimes hosts speakers from the region’s germinal companies and serves as an incubator for interest beyond the classroom. Professors and graduate students are conducting a variety of drone-related research, some of which is being applied in government and industry projects. Perhaps most tellingly, Stanford’s 10-week skills-and-thrills course in designing, building and flying a drone — popular enough to often go simply by its catalog number, AA241X — is rooted in the kind of interdisciplinary collaboration that has become a touchstone of university activities. The knowledge students acquire can underpin careers that center not only on the many branches of engineering, but also on commerce, management or public administration. “Unmanned aircraft systems,” notes Juan Alonso, an aero-astro professor who is one of the AA241X instructors, “are first and foremost systems — a combination of multiple physical and functional elements that must work together in order to accomplish a complex task. . . . 
We place particular emphasis on making sure the students tackle all of the elements in the system and gain experience at all stages of the design, prototyping and operation.” GROUNDED: Curb your enthusiasm. [Photo: Stanford News Service] Stanford’s influence on how drones are made, used and perceived may become one of the university’s signature contributions to social change. Graduates are pouring into drone start-ups, especially in the Valley. And some of these companies, or at least their business models, are bound to burgeon. The impact may transform agriculture, policing, news gathering, filmmaking and photography, construction, emergency services and global exploration. Along the way, controversies are going to heat up about intrusions and misuses, about camera-and-sensor spy drones lurking at schools or peeking at what you do at an automated teller machine. Stanford faculty and alums from all the scientific and mathematical disciplines, not to mention those who will help shape the legislative and regulatory territory, anticipate a world in which drones can help us find parking, examine bridge corrosion and obey virtual barriers — “geofences” that program boundaries — when nearing forbidden locations. Perhaps the most intriguing factor is one emphasized by aero-astro professor Marco Pavone, who co-authored a recent article with its headline touting “Flying Smartphones.” Pavone, aero-astro professor Mac Schwager, ’00, and Ross Allen, PhD ’16, described aerial drones as on the way to changing consumer electronic technology with as much everyday impact as smartphones had on personal computing. Seen in that context, there’s much less confusion with drones as war machines. But for many people, merely hearing the word “drone” instantly evokes missile strikes that incinerate their targets. Researchers and entrepreneurs who focus on mainstream applications sometimes recoil from the word, referring only to unmanned aerial systems and vehicles. 
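The "geofence" mentioned above is, at its simplest, a containment test: the drone compares its position against a programmed boundary and refuses to cross it. A minimal illustrative sketch in Python; the circular zone, coordinates, and equirectangular distance approximation are assumptions for brevity (real systems use GPS polygons, altitude layers, and certified navigation data):

```python
import math

# Hypothetical no-fly zone: a circular geofence around a point.
FENCE_CENTER = (37.4275, -122.1697)  # latitude, longitude (illustrative)
FENCE_RADIUS_M = 500.0

def distance_m(p1, p2):
    """Approximate ground distance in meters between two lat/lon points
    (equirectangular approximation, fine at geofence scales)."""
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6371000  # mean Earth radius in meters

def inside_fence(position):
    """True if the drone has crossed into the forbidden zone."""
    return distance_m(position, FENCE_CENTER) < FENCE_RADIUS_M

print(inside_fence((37.4276, -122.1698)))  # a few meters from center -> True
print(inside_fence((37.5, -122.0)))        # kilometers away -> False
```

A flight controller would run a check like this continuously and command a hover or turn-back maneuver the moment it returns True.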
Nonetheless, drones that gather defense intelligence or serve other military functions may become far more prevalent tools; they’re reportedly being tested for underwater missions. In fact, the definition of a drone is still flexible, with room for nonflying unmanned ships. Predictably there will be periodic ferment about threats from drone smuggling, drone vandalism and drone terrorism. Pavone, Schwager and Allen envision aerial possibilities that range from the relatively mundane, such as monitoring freeway congestion, to the flamboyantly avant-garde, such as artistic or advertising displays formed by groups of drones radiating colored light. Why not a roadside billboard that you’re driving toward as it coalesces, fluttering into a floating, glowing message? You can hear the differing passenger reactions now: “Awesome” and “Keep your eyes on the road.” Pavone specializes in research on the analysis, design and control of autonomous systems. For drones that largely means “endowing the robot with flexibility in its decision-making capabilities.” Figuring out how to make drones “smarter” is crucial, says Pavone, because it’s assumed they will encounter far more impediments, safety cues and shifting conditions than engineers can anticipate. In other words, drones must learn to account for the unforeseen or inconstant. A video filmed in Pavone’s Autonomous Systems Laboratory demonstrates the issue about as entertainingly as possible: Allen, Pavone’s former PhD student, recorded himself “fencing” with a small drone in an indoor space that confined its range of motion. Allen would lunge with his blade, and the swordless quadcopter would maneuver and dodge. The drone was configured to make kinodynamic calculations — figuring out how to instantly react to a problem while also accounting for physical boundaries. Allen’s primary research objective was to show how planning algorithms (computational procedures) could animate a high-speed robot. 
And Pavone had thrown down a gauntlet. “I think he was the one,” recalls Allen, “who first brought up the challenge of having the quadrotor avoid a person trying to interfere with it. I think he humorously phrased it as something like, ‘Swing a stick at it, and if it dodges you, I’ll give you your PhD.’” As amusing as the video comes across, no one is going to suggest it’s the next Zorro. But it garners attention for good reason. Despite a host of developmental, business and governmental complexities, the prospect of experimenting or working with drones seems to have an especially adventurous allure. Even spasms of failure provide inspirational lore. Consider Peter Blake, MSM ’14, who arrived at the Graduate School of Business with a background that included flying Harrier fighter jets in Iraq for the U.S. Marine Corps. Which didn’t fulfill any of the qualifications for AA241X. When Blake asked in, Alonso interrogated him: “I said, ‘Well, do you know anything about aerodynamics; do you know anything about controls; have you ever done any computation?’” The response, Alonso says, was “No, no, no. But I’m a pilot.” GET SMART: Allen worked on giving drones the ability to react quickly in tight spots. [Photo: Courtesy of Autonomous Systems Lab/Stanford University] Alonso decided to see what the teamwork would be like if Blake and another GSB student were planted in a thicket of engineers. Alonso remembers telling Blake, “Initially, your job is to be the pilot for the team, because you have to fly it around remote-controlled and then turn it on so it goes autonomous to see if it works, right?” And then? “He crashed the drone four times in his first five flights.” Blake’s sense of humor was more durable than the drone, which he says took on a Frankenstein-created appearance from all its duct tape and epoxy repairs. Hard to believe he kept $30 million military planes in the air, he quips. 
Even so, the group dynamic was strong: In a class competition to perform a simulated search-and-rescue mission, Blake’s team won. In terms of practical training and cross-disciplinary effort, Blake looks back on AA241X as “very representative” of what he went on to experience in the budding drone industry. After working as director of flight operations and client services for the San Francisco start-up Skycatch, he relocated to Denver and joined Aeryon U.S., a provider of small unmanned aerial systems, as director of worldwide client solutions (flight demonstrations, training and product implementation). He says drone businesses are still in a nascent phase, crafting their strategies. Stanford, he says, “is in a great position to influence how the industry is shaped from a technological standpoint.” There are a number of academic and commercial hot spots for drones around the United States — Stanford faculty are quick to point out the nationally dispersed momentum in the field — but there may be an extra cachet to the theme of “Silicon Valley meets aviation.” That’s because, as Alonso explains, “It’s not just about some people building the airplanes or building the vehicles. It’s about the sensors; it’s about the vision; it’s about the decision-making; it’s about the data science; it’s about the onboard and off-board computing capabilities.” Stanford’s role as “catalyzer for the energy that’s in the Silicon Valley,” he says, makes it one of the world’s leading spots for the drone revolution. Kochenderfer, ’03, MS ’03, is director of the Stanford Intelligent Systems Laboratory, whose core focus is on “decision-making under uncertainty.” Before returning to the university as a professor in 2013, he was at the Massachusetts Institute of Technology’s Lincoln Laboratory, where he instigated the ongoing development of a major advancement in an international collision avoidance system for manned aircraft. 
That work is now being adapted for drones, and his research interests include the computational methodologies for driverless cars. “How do you make good decisions when you are uncertain about the current state of the world, how the world will evolve and when you have multiple competing objectives?” Kochenderfer’s students undertake that thinking. They go by a lab-based nickname that’s boringly spelled as SISLers and rousingly pronounced as “sizzlers.” It’s that idea of competing interests that gives drones their image as a double-edged sword. In late spring, for example, Menlo Park made the news for having more commercial drones (176) registered with the Federal Aviation Administration than any other U.S. city. By late summer, the local headline was that drones had been banned at all city parks. Across the country, the media had a field day when a woman blasted a drone out of the air with a shotgun when it flew over the property of her celebrity neighbor, actor Robert Duvall. That happened in Virginia, which shortly thereafter got another kind of attention: Virginia Tech’s involvement with UAS research instigated an experiment in which burritos were delivered to campus for a few weeks by winches lowered from drones. Kespry, a Menlo Park drone start-up that specializes in functions such as surveying and inspection for the construction and insurance industries, channels an inventive spirit into nitty-gritty business realism. “Drones Will Change Everything” is a prominent slogan on the company’s website, but business and policy vice president Gabriel Dobbs, JD ’14, MBA ’14, supplies the caveat: “Drones can do remarkable things but not everything people imagine they can do.” Indeed, founder and chief executive Paul Doersch, ’10, notes that he trekked through a slew of vineyards and orchards before determining that agriculture had less immediate potential than other applications. 
(“Farmers don’t jump on new technology quickly.”) But even the firm’s hard-core geekiness has a poetic aura. In addition to offices, Kespry has a shipping container filled with machines that sit on shelves while they “think” they’re flying. “The drones are dreaming,” say the Kespry employees with sly smiles. It’s more like a guided meditation: The drones are connected to physics simulators that attempt to replicate the challenges of real-world navigation. The drones go everywhere in whirring (snoring) reverie. Kespry’s day-to-day operations, which as of late August include a 2.0 version of the firm’s drone system, are headlined by proficiencies in flight time, wind resilience and data gathering. It’s also the kind of enterprise that provokes many of the questions about how drones will affect jobs, and in this case highlights the argument that aerial examination of, say, a sprawling quarry is a significant safety enhancement. Doersch, who majored in computer science, retains a sense of “the big picture” — of what’s happening to increase, with incredible precision, our knowledge of our surroundings. “We scan the physical world into the cloud.” Sounds socially and statistically gargantuan. Then add mushrooming recreational use and experimental initiatives and you get, yes, a regulatory muddlement. The fog at local, state and federal levels has been pervasive, but the last half year saw some significant clarification. As of late August, new commercial-scientific rules from the FAA eliminate the need to obtain special permissions before using drones for a wide variety of purposes that include education and research. 
Details are copious, but the essential provisions are these: The unmanned aircraft must weigh less than 55 pounds, observe an altitude ceiling of 400 feet and a maximum speed of 100 miles per hour, not fly over anyone not participating in the operation, not fly at night, and stay within eyesight of the operator (precluding for now the beginning of grand-scale package bombardment). Another notable requisite is that the operator be at least 16 years old and have a remote pilot certificate with a small-UAS rating (or be under the supervision of a certificate holder). Certain rules (such as the weight limit) overlap with those already in effect for recreational drone use, but consumers and hobbyists are far less regulated as a result of federal law enacted in 2012. Despite some controversy and puzzlement, restrictions such as an age minimum and training stipulations for operators don’t apply. How much this contributes to drone transgressions is an open question. The practicality of enforcing any rules anywhere is another enigma.
https://medium.com/stanford-magazine/droneland-c524d60ae88e
['Stanford Magazine']
2016-10-28 21:55:40.602000+00:00
['Features', 'Drones', 'Computer Science', 'Technology', 'Robotics']
1,024
Have We Reached the Phase of Smart Financial Crime Detection?
Have We Reached the Phase of Smart Financial Crime Detection? Financial Technology Why are financial crimes on the rise? Many people ask this question as crime cases in the financial industry rise. According to a McKinsey report¹, banks have lost millions of dollars in the last decade alone, and this could worsen as criminals upgrade their financial crime tactics. Financial crime analytics can help financial institutions and investigators detect fraud and money laundering, assess risk, and report on data to prevent financial crime. Cases of banking fraud² increase each year, and despite stringent measures, losses continue to spike, with financial institutions lacking concrete strategies to address this growing problem. Analytics help to pinpoint transactions that need further scrutiny, identifying the needle in the haystack of financial data. Photo by Bermix Studio on Unsplash With only a 1% success rate in recovering stolen funds, the financial services industry has realized that traditional approaches to dealing with financial crime are not working. Across the ecosystem, regulatory authorities, enforcement agencies, and financial institutions³ are working together to disrupt financial crime. This requires a proactive approach to predict and manage the risks posed to people and organizations, not merely to comply with rules and regulations. The challenges financial institutions face regarding money-laundering activities have increased substantially in the era of globalization, alongside a rising menace of financial crime and counterfeiting. As money launderers become more sophisticated, the effectiveness of anti-money laundering policies is under heightened regulatory scrutiny, and the probability of banks facing rigid penalties and reputation loss in case of shortcomings in AML management has increased. A good example of a tool used for financial crime detection is AMLOCK, an enterprise-level, end-to-end financial crime management solution. 
It integrates the best of anti-money laundering⁴ and anti-fraud measures to effectively identify, manage, and report financial crime. It provides various features that cater to the profiling, risk categorization, transaction monitoring, and reporting requirements of financial institutions, in line with AML (Anti-Money Laundering) regulations. In this article, I will explore current practices in financial crime detection, look at use cases, and explore what the future looks like for financial technology and fraud reduction. Overview Criminals are pervasive in their determination to identify and exploit vulnerabilities throughout the financial services industry. Their ability to collaborate and innovate necessitates a proactive approach: responding to individual events while disrupting crime networks. Combating #financialcrime is complementary to generating revenue. The big data analytical capabilities that enable a bank to personalize product offerings also underpin an effective approach to spotting and responding to criminal behavior. To outpace fraudsters, financial institutions and payment processors need a quicker and more agile approach to payment fraud detection⁵. Instead of relying on predefined models, applications need the ability to quickly adapt to emerging fraud activities and implement rules to stop those fraud types. Not only should organizations be able to adjust their detection models, the models themselves should be interoperable with any #datascience, machine learning, open source and AI technique from any vendor. In addition, to stop fraud from traveling undetected from one area or channel to another, aggregating transactional and non-transactional behavior from across various channels provides greater context and spots seemingly innocuous patterns that connect complex fraud schemes. 
Artificial Intelligence For Financial Crime Detection Within financial institutions, it is not uncommon to have high false-positive rates, that is, notifications of potential suspicious activity that do not result in the filing of a suspicious transaction report. For AML alerts, high false positives are the norm. The reason is a combination of dated technology and incomplete, inaccurate data. Traditional detection systems produce inaccurate results due to outdated rules, or to peer groups creating static segmentations of customer types based on limited demographic details. Photo by Jp Valery on Unsplash In addition, account data within the institution can be fragmented, incomplete and housed in multiple locations. These factors are part of the reason why alerts and AML are key areas to apply #artificialintelligence, advanced analytics⁶ and RPA. These technologies can gather greater insight, understand transactional patterns at a larger scale and eliminate tedious, time-consuming, low-value aspects of the investigation. AI can augment the investigation process and present the analyst with the most likely results, driving faster and more informed decisions with less effort. AI-based Intelligent Customer Insights Periodic reviews of customer accounts are performed as part of a financial service organization’s risk management process, to ensure the institution is not unwittingly being used for illegal activities. As a practice, accounts and individuals that represent a higher risk undergo these reviews more often than lower-risk entities. For these higher-risk accounts, additional scrutiny is performed in the form of enhanced due diligence. This process involves looking not only at government watch lists and sanctions lists, but also at news outlets and business registers to uncover any underlying risks. 
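To make the contrast concrete, here is a minimal, hypothetical sketch (all customer names, amounts and thresholds below are invented for illustration) of why a one-size-fits-all static rule produces false positives that a per-customer behavioral profile avoids:

```python
from statistics import mean, stdev

STATIC_THRESHOLD = 5_000  # the same fixed rule applied to every customer

def static_rule_alerts(transactions):
    """Flag every transaction above the one-size-fits-all threshold."""
    return [t for t in transactions if t > STATIC_THRESHOLD]

def behavioral_alerts(history, transactions, z_cutoff=3.0):
    """Flag only transactions more than z_cutoff standard deviations
    above this specific customer's own historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [t for t in transactions if sigma > 0 and (t - mu) / sigma > z_cutoff]

# A customer who routinely moves large sums: the static rule alerts on
# every transaction, while the behavioral profile flags only the outlier.
history = [6_000, 7_500, 6_800, 7_200, 6_500, 7_000]
new_txns = [6_900, 7_100, 40_000]

print(static_rule_alerts(new_txns))           # all three exceed the static threshold
print(behavioral_alerts(history, new_txns))   # only the 40,000 outlier
```

In practice, the segmentation would be far richer than a single mean and standard deviation, but the principle is the same: scoring against the customer's own behavior rather than a global threshold cuts down the noise that analysts must triage.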
As one would expect, such less-common investigations took up the majority of the due diligence process because they typically required lengthy, manual searches and validation that a name matched the individual or entity under review. With modern technologies like entity link analysis, which identifies connections between entities based on shared criteria, as well as #naturallanguageprocessing to gain context from structured and unstructured text, much of this investigation process can be automated. By using AI to perform the initial search and review of a large number of articles and information sources, financial institutions gain greater consistency and the ability to record the research results and methodology. Much like the AML alert triage example mentioned previously, the key is not to automate analysts out of the process. Instead, AI automates the data gathering and initial review so that analysts can focus on reviewing the most pertinent information, providing feedback on the accuracy of those sources and making the ultimate decision on the customer’s risk level. Analytics for Financial Fraud Detection Innovation in the payments space is at a level not seen in decades. From mobile payments to peer-to-peer payments⁷ to real-time payments, there is a growing number of payment services, channels and rails for consumers and businesses alike. But these myriad options also give fraudsters plenty of openings for exploitation. Easy-to-exploit issues with these new payment services include their speed and their lack of transactional and customer behavioral history. These issues put financial institutions and payment processors in a difficult position. If they block a transaction, they could negatively impact a legitimate user, leading the user to either abandon the platform or use a competitor instead. If the transaction is approved and it is fraudulent, it erodes trust in the payment provider and leads to a loss. 
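As a rough illustration of the entity link analysis idea (all entity names and attribute values below are fictitious), linking entities on shared attribute values such as a phone number or a registered address can be sketched in a few lines:

```python
from collections import defaultdict
from itertools import combinations

# Fictitious entities with identifying attributes; two entities are
# "linked" whenever they share at least one attribute value.
entities = {
    "Acme Trading Ltd":   {"phone": "555-0101", "address": "1 Harbor Rd"},
    "J. Doe":             {"phone": "555-0101", "address": "9 Elm St"},
    "Harbor Exports LLC": {"phone": "555-0199", "address": "1 Harbor Rd"},
    "M. Smith":           {"phone": "555-0242", "address": "4 Oak Ave"},
}

def shared_attribute_links(entities):
    """Return sorted pairs of entities that share an attribute value."""
    by_value = defaultdict(set)
    for name, attrs in entities.items():
        for key, value in attrs.items():
            by_value[(key, value)].add(name)
    links = set()
    for names in by_value.values():
        for a, b in combinations(sorted(names), 2):
            links.add((a, b))
    return sorted(links)

for a, b in shared_attribute_links(entities):
    print(f"{a} <-> {b}")
# Acme Trading Ltd is linked to J. Doe (shared phone) and to
# Harbor Exports LLC (shared address); M. Smith stands alone.
```

A production system would weight links by attribute type, traverse multi-hop chains, and fuzzy-match names, but the core operation, grouping entities by shared identifiers, is this simple inversion of the attribute table.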
Traditional fraud detection systems were designed for a relatively slow-moving fraud environment. Once a new fraud pattern was discovered, a detection rule or model would be created over a matter of weeks or months, tested and then put into production to uncover fraud that fit those known fraud typologies. Obviously, the weakness of this approach is that it takes too long and relies on identifying the fraud pattern first. In the time it takes to identify the fraud pattern, develop the model and put it into use, consumers and the institution could experience considerable fraud losses. In addition, fraudsters, aware of this deficiency, can quickly and continuously change the fraud scheme to evade detection. Case Studies of Financial Crime Technology Let us now explore some use cases of financial technology and how companies benefited through fraud reduction. 1. MasterCard To help acquirers better evaluate merchants, MasterCard created an anti-fraud solution using proprietary MasterCard data on a platform called MATCH, which maintains data on hundreds of millions of fraudulent businesses and handles nearly one million inquiries each month. As the volume of data in the platform grew over the years, MasterCard staff found that its homegrown relational database management system lookup solution was no longer the best option to satisfy the growing and increasingly complex needs of MATCH users. Photo by CardMapr on Unsplash Realizing that there was an opportunity to deliver substantially better value to its customers, MasterCard turned to the Cloudera Enterprise Data Hub. After successfully building, integrating, and incorporating security into its EDH, MasterCard added Cloudera Search and other tools and workloads to access, search, and secure more data. 2. United Overseas Bank (Asia) The challenge UOB faced was the data limitations of its legacy systems. With legacy databases, banks are restricted in both the amount and the variety of data. 
As a result, they miss key data attributes that are necessary for anti-money laundering, transaction monitoring, and customer analytics engines to work effectively. UOB established the Enterprise Data Hub⁸ as the principal data platform that, every day, ingests two petabytes of transaction, customer, trade, deposit, and loan data and a range of unstructured data, including voice and text. 3. Bank Danamon (Indonesia) Bank Danamon is one of Indonesia’s largest financial institutions, offering corporate and small business banking, consumer banking, treasury and capital markets. Bank Danamon uses a machine-learning platform for real-time customer marketing, fraud detection, and anti-money laundering activities. The platform integrates data from about 50 different systems and drives machine-learning applications. Using #machinelearning on aggregated behavior and transaction data in real time has helped Bank Danamon reduce marketing costs, identify new patterns of fraud, and deepen customer relationships. This is the Best Time to Implement AI for Financial Crime Detection Financial crime and corruption are at epidemic levels and many countries are unable to significantly reduce corruption. Regulators and financial institutions are looking to innovative AI technology to fix problems that have grown beyond their ability to solve with intuition and existing tools alone. To justify cognitive initiatives, financial services organizations need to show real return on value in such investments. IBM is able to demonstrate the value in a variety of use cases, as shown in the client success stories outlined in this white paper. A misunderstanding about artificial intelligence is the belief that it will replace employees. However, the financial crime analyst is and should always be an essential part of this process. AI, process automation and #advancedanalytics are tools that can perform analyses and tasks in a fraction of the time it would take an employee. 
Yet the ultimate decision-making power still lies with the analysts, investigators and compliance officers for whom this technology provides greater insight and eliminates tedious task work. This augmented intelligence is the next phase of the fight against financial crime, and one that financial institutions, regulators and technology partners can only win together. What do you think? Is the current technology capable of addressing rising fraud cases and financial crime? Share your comments below and contribute to the discussion on Have We Reached The Phase Of Smart Financial Crime Detection? Works Cited ¹McKinsey Report, ²Banking Fraud, ³Financial Institutions, ⁴Anti-Money Laundering, ⁵Payment Fraud Detection, ⁶Advanced Analytics, ⁷Peer-to-Peer Payments, ⁸Enterprise Data Hub More from David Yakobovitch: Listen to the HumAIn Podcast | Subscribe to my newsletter
https://medium.com/towards-artificial-intelligence/have-we-reached-the-phase-of-smart-financial-crime-detection-9f3d98fb488
['David Yakobovitch']
2020-12-17 20:01:08.149000+00:00
['Opinion', 'Analysis', 'News', 'Artificial Intelligence', 'Technology']
1,025
New Helium App Features: Remote Assert, Search, & More 📱!
Spring has sprung and the Helium App has three great new features we’re excited to tell you about! Remote Assert Our first feature is Remote Assert! Highly requested by Hosts, Owners, and those with remote Hotspot deployments, Remote Assert removes the need to pair with the physical Hotspot. Instead, the assert can be initiated by the Hotspot Owner any time, anywhere. All you need to do is update the app to version 3.1.0, find your Hotspot, press Settings, then Update Location, and drop the pin for its new location. The cost to assert is the same as before. Bundled in with the new Remote Assert feature is the ability to add a Hotspot’s TX/RX antenna gain (in dBi) and Height (in meters, above ground level) information to the blockchain. Why is this beneficial? By supplying these additional details, the Helium Network can better understand Hotspot placement and antenna upgrades. Note that this information won’t be used immediately by Proof-of-Coverage, but the network intends to use it in the near future. Search Ever wanted to quickly look up which Hotspots are in a city or see the earnings of a Hotspot on the fly, without leaving the app? Now you can! With the new Search functionality in the App, not only can you search by Hotspot name on the Network, you can also search by name among the Hotspots you own! If you are a Patron with many Hotspots to your name, this will be the feature for you. Try it out yourself! Start typing any Hotspot name and it will filter down and suggest possible Hotspots. Tap on them and hit the Flag icon to Follow them (more on that later). You can also “fly to” any city on the Network to see if there are Hotspots deployed. Follow a Hotspot Our next (very exciting) feature also benefits Hosts, Owners, and curious spectators. Follow a Hotspot allows anyone to favorite, or follow, a Hotspot on the Network. 
We’ve often heard Hosts request the ability to view their Hotspot’s mining and activity on their own personal Helium App so they know exactly how their Hotspot is doing, be alerted when it goes offline, or simply update the Wi-Fi once they receive the Hotspot (Bluetooth Pairing required). Notice that it’ll say “Owned By…” with the Flag icon 🏳️. When you’re following a Hotspot, the flag will turn purple and will be added to the Followed Hotspots filter on the Hotspot page. As a Host, you can pair with a Followed Hotspot to update Wi-Fi and even run Diagnostics (Bluetooth pairing required). For security reasons, users won’t be able to Transfer Hotspot or Update Location if they do not own it. Feedback and Contributions Welcomed! And that’s it for our huge features update for the Helium App! We hope this improves the user experience for all types of Network users and we look forward to your feedback. For feature requests, bug reporting, or even trying your hand at contributing to code, please visit our Github page.
https://blog.helium.com/new-helium-app-features-remote-assert-search-more-ac96c584d665
['Coco Tang']
2021-04-21 17:56:59.307000+00:00
['Technology News', 'Technology', 'Crypto', 'Tech', 'Blockchain']
1,026
Achieving Idempotence with httpd in Ansible
Ansible is one of the most famous tools for automating tasks. Ansible is an open source tool used for configuration management and application deployment, and it has gained wide popularity since its inception in 2012. There are various benefits to using Ansible for automating tasks over a general-purpose language like Python, because Ansible provides a Resource Abstraction Layer (RAL). I won’t talk about it much today; if you would like to know more about Ansible, check out my article here. Today I would like to talk about idempotence in Ansible and how we can achieve it in the case of an httpd setup. What is idempotence? One of the most interesting features that Ansible provides is idempotence. To give a better idea of what idempotence is, let me give an example. Let’s say you have a certain set of tasks: first create a folder, then create a file and edit that file, and lastly delete the directory. This is a fairly simple set of tasks. Let’s say you automate the first two tasks and later add the last one. If you run this set of tasks again, the first two tasks have already been accomplished, so all that needs to happen is deleting the directory. This property is called idempotence, and Ansible has it. This means that it can understand which tasks have already been done and skip them, so that no time or resources are wasted on redundant tasks. This is a very important feature because in scenarios where the number of tasks is large, it saves time as well as resources. At times this also helps in avoiding errors. Let’s say you want to create a directory and the task is already done. If you try to redo this task, the OS will throw an error because two directories can’t have the same name in the same location. This and various other problems don’t even occur when using Ansible. So what’s the problem today? Most of the modules have support for idempotence. But let me tell you about the problem we are going to deal with. 
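The restart task discussed next does not survive in this copy of the article, so here is a sketch of what such a play typically looks like; the modules (`package`, `copy`, `service`) are standard Ansible, while the host group, file paths and task names are illustrative:

```yaml
# Sketch: a play whose final task restarts httpd unconditionally.
# Because "state: restarted" executes on every run, this play is
# not idempotent on its own.
- hosts: webservers
  tasks:
    - name: Install httpd
      package:
        name: httpd
        state: present

    - name: Copy web pages
      copy:
        src: index.html
        dest: /var/www/html/index.html

    - name: Start services
      service:
        name: httpd
        state: restarted
```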
This is the task to restart the httpd service. The problem with this piece of code is that it always runs whenever we run the Ansible playbook, which goes against the idempotence property. So when do we want this task to run? We want it to run whenever there is a change in the configuration made by the other tasks. Otherwise we can just skip it. So how are we supposed to do this? Handlers in Ansible Handlers are like functions in any other programming language. We can allow them to run according to a condition that we set using notify. We want the service to restart once we update the webpages when dealing with httpd. The Copy web pages task reports a change only when the webpages actually change, and when that happens the notify prompts the Start services handler to run, just like a function call. To allow this to happen we need to place the task within the handlers section, so the service is restarted only when the webpages are updated. This helps achieve idempotence in Ansible when some tasks cannot achieve it on their own. Nothing we did here is out of the box, and some people might think this is too much of a hassle to begin with. However, it comes in very handy when there are thousands of tasks running and we are trying to save every bit of resources we can. Thanks : )
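For reference, the handler pattern described above can be sketched as a playbook like this (task names mirror the article; the host group and file paths are illustrative). The Copy web pages task notifies the handler, and the handler runs only when that task reports a change:

```yaml
- hosts: webservers
  tasks:
    - name: Copy web pages
      copy:
        src: index.html
        dest: /var/www/html/index.html
      notify: Start services   # triggers the handler only on change

  handlers:
    - name: Start services
      service:
        name: httpd
        state: restarted
```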
https://medium.com/@2503arjun/achieving-idempotence-with-httpd-in-ansible-8e99e04a7475
['Arjun Chauhan']
2020-12-17 19:56:07.581000+00:00
['DevOps', 'Ansible', 'Technology', 'Automation', 'Httpd']
1,027
Evercity has been selected to join The Luxembourg Blockchain Lab
Evercity has been selected to join The Luxembourg Blockchain Lab, after competing with 25 other international startups! The Luxembourg Blockchain Lab is a new initiative from five major actors of the Luxembourg fintech ecosystem: Infrachain, the LHoFT, LIST, SnT and LëtzBlock. The goal of the consortium is to create and nurture the blockchain ecosystem in Luxembourg, set up a landmark EU hub for blockchain research, education and industry projects, as well as develop industry capabilities to aid the deployment of the latest blockchain and distributed ledger technologies (DLT). We thank the organisers of the contest and are excited to help Luxembourg become the world’s central hub for new blockchain technologies! More information: http://blockchainlab.lu/news/
https://medium.com/@evercity/evercity-has-been-selected-to-join-the-luxembourg-blockchain-lab-a3283beed220
[]
2020-12-07 16:12:01.116000+00:00
['Blockchain', 'Blockchain Technology', 'Blockchain Development', 'Startup', 'Luxemburg']
1,028
How to get and use Techline — Linus Surfshark Discount
Techline — Linus is a UK-based YouTuber and his channel is all about tech gadgets. Everything from smartphones to earphones: whatever gadget you need, you will find a review of it on his channel. But wait, there’s more: in his latest video the YouTuber moves away from gadgets and presents a piece of software that he uses — Surfshark VPN. And he not only presents it, but shares a big discount for the software. And it is super easy to get. Get the Techline — Linus Surfshark Discount: a very quick guide Go to Surfshark’s website. Insert the coupon code “techlinetech” as seen below and complete the purchase. The other way is to just go to this link with the coupon code already filled in: Techline Surfshark coupon code. What does the discount presented by Techline offer? The discount includes 82% off Surfshark’s 2-year deal. With the discount you pay 1.74 EUR per month, which is 41.81 EUR for two years, instead of 237.36 EUR. That is a huge discount for a high-quality VPN. As everything gets digitised, the threats that lie behind it increase. Cybercrime happens every day, and a VPN is a way to avoid it. Techline — Linus recognises this threat and presents his viewers a way to avoid it with Surfshark VPN. Can VPNs actually protect us? A Virtual Private Network allows you to access the internet while being absolutely secure and private. It encrypts your traffic so nobody can snoop on what you are doing, and by changing your IP address it lets you access any content you want on the internet. A VPN is crucial these days if you want to be private online, if you want to secure your sensitive information (for example, passwords) from hackers, and if you just want to enjoy an open internet with no borders. Why should Surfshark VPN be your choice? Surfshark VPN is comparatively new in the market, but it is definitely entering it at full speed. It’s a high-quality, easy-to-use, cheap VPN solution for you, your family and friends. Some features of Surfshark VPN: Strict no-logs policy. 
Surfshark is based in the British Virgin Islands, so under that jurisdiction it doesn’t need to collect logs. And it doesn’t. So basically not even the VPN provider knows your traffic, and you can be absolutely secure. Kill Switch. The Kill Switch ensures that if the VPN connection drops, your activity won’t be exposed. Unlimited devices. No need to limit yourself: Surfshark lets you connect your account to as many devices as you wish, so you can make every single device secure and even share the account with your family or friends. NoBorders mode. This means that the VPN makes the internet open for you. You can watch whatever you want, you can access any country’s library on many streaming services, and you can even use it in China, where basically all the fun stuff is blocked. Highest-quality encryption and secure protocols. Surfshark uses industry-leading encryption and the most secure protocol as its default, so stay calm: your privacy will be in good hands. Not long ago Surfshark also had an independent audit done for its extensions, and the results show that the VPN itself is very secure and high quality. You can read more about it in this article. …………………………………………………………………………………….. 
I could go on and on about what the VPN can offer and why it is important to have one overall, but you can read more about it by yourself here. Remember that your online privacy is key right now, and don’t wait until someone hacks you to understand that. Get private right now and surf calmly. I know Techline — Linus approves this. Get Surfshark VPN Techline discount Watch Techline — Linus Youtube channel here
https://medium.com/@gamderback/how-to-get-and-use-techline-linus-surfshark-discount-1010d1e8f5ae
['Gordon Samer']
2019-02-06 13:11:06.828000+00:00
['Technology', 'VPN', 'Software', 'Discount']
1,029
The Bots Boosting QAnon
Graphic by Samantha Weslin Maybe you heard about QAnon when those alleging voter fraud started throwing around the Dominion conspiracy theory. Or maybe you heard about it during lockdown protests. But what is QAnon? And how has it proliferated across social media so quickly? What is QAnon? QAnon is a baseless conspiracy theory claiming that a Satan-worshipping cabal of blood-drinking pedophiles runs the world. In the mythos of the theory, Donald Trump is portrayed as a messianic figure who has come to save the world from the ‘deep state.’ Followers of the conspiracy theory interpret ‘Q drops,’ or posts from Q, who is a shadowy figure purporting to be a high ranking US government official. The conspiracy theory has spread to over 70 countries and is particularly popular in Germany, Brazil, and the United Kingdom. Though the theory centers on the supposed sex-trafficking cabal, it encompasses a variety of other conspiracy theories. Many QAnon followers believe that JFK Jr. is still alive, that the furniture company Wayfair is trafficking children in overpriced cabinets, or that Joe Biden’s recent foot injury is simply a cover for an ankle monitor. While these theories seem (and are) far-fetched and ludicrous, the online movement has had very real effects in the offline world. Followers of Q have doxxed politicians and activists, launching online harassment campaigns that include death threats and publicization of sensitive information, like home addresses and phone numbers. Supporters have been amplifying the debunked Dominion voter fraud conspiracy, spreading disinformation that undermines democracy and faith in elections. Q believers have even committed acts of violence. The murder of a mob boss and a legal theorist, several kidnappings, and cases of property damage have been linked to individuals with ties to the conspiracy theory. How did QAnon spread so quickly? 
While QAnon conspiracies are created and spread by real human beings, bots have played a significant role in amplifying the reach of the conspiracy. A study of over 240 million tweets related to the recent US presidential election found that two QAnon hashtags (“WWG1WGA” and “qanon”) were among the top 15 hashtags used in tweets by bot accounts. Tweets that included QAnon and other conspiracy theory-related hashtags were more likely to have come from bots compared to other election-related hashtags. An estimated 13% of the analyzed accounts using a conspiracy-related hashtag were run by bots. Bots didn’t just amplify QAnon hashtags, though. Bot accounts played a role in spreading news stories by far-right media networks that promote conspiracy narratives, such as OANN or Breitbart. These platforms get a significant boost from bots — over 20% of the accounts that shared their content were run by bots. Why do these bots matter? When bots amplify conspiratorial hashtags, they broaden the audience that is susceptible to falling down the rabbit hole of QAnon. Whether a Twitter user is drawn into the conspiracy theory by a bot or by a real account, the very presence of bots increases the chance that a user will stumble across QAnon. Sharing a story from an established website adds a layer of credibility to the bots’ tweets, increasing the likelihood that users will take QAnon’s claims seriously. The amplification of QAnon on Twitter artificially inflates mainstream perceptions of the scale of the movement, even outside of Twitter. Being seen as a mass movement rather than a tiny minority lends legitimacy to the theory within the mainstream — 56% of Republicans think that QAnon is mostly or partly true. So why are these bot armies being deployed to aid QAnon? 
A recent takedown of over 400 troll accounts operated by the Internet Research Agency, a Russian troll farm implicated in interference in the 2016 election, found that the accounts frequently used #QAnon and related hashtags. This disinformation campaign spread dangerous lies, promoting falsehoods about the dangers of COVID-19 and encouraging distrust in democratic institutions to further Russia’s political interests. How do we stop bots from amplifying QAnon? Although deplatforming efforts by Twitter have been somewhat effective in removing the largest QAnon influencers, influencers and bots alike are able to return to the platform fairly quickly. Bot armies are cheap to launch, leaving platforms playing whack-a-mole. With technology like humanID, platforms can ensure that each user has a single digital account tied to their phone number, while keeping the user anonymous. Thus, bot armies become cost-prohibitive, preventing spamming and disinformation efforts. However, it’s important to note that the majority of accounts promoting Q are run by real people who believe in the conspiracy theory. QAnon is a symptom of the larger problems of distrust in the media and a climate of uncertainty. Tech alone won’t be able to deradicalize the thousands of people who believe in QAnon, and it won’t be able to stop the conspiracy theory. But it can prevent it from rapidly proliferating across the web, stopping radicalization before it begins. What’s humanID? humanID is a new anonymous online identity that blocks bots and social media manipulation. If you care about privacy and protecting free speech, consider supporting humanID at www.human-id.org, and follow us on Twitter & LinkedIn. All opinions and views expressed are those of the author, and do not necessarily reflect the position of humanID.
https://medium.com/humanid/the-bots-boosting-qanon-e1bfc20340c3
['Jessa Mellea']
2020-12-22 21:12:55.386000+00:00
['Conspiracy Theories', 'Twitter', 'Bots', 'Technology']
1,030
Part 107 Drone Operator Switches from DJI to Skydio and Increases Per-Pilot Revenue 37%
This blog is a teaser and summary of a longer-form written case study, available here at Skydio.com. Accurate Drone Solutions is a Drone Service Provider that generates high-precision 3D models for construction customers. One of the best parts of my job is talking to customers and listening to their stories about the way that Skydio 2 is changing the way they fly and, in some cases, the way they fundamentally do business. Today I'm excited to share with you the story of Accurate Drone Solutions, a construction-focused Part 107 Operation in the Pacific Northwest of the United States. With 11 years of experience in UAS for construction, CEO Sam Delong has embraced Skydio and AI-powered autonomy to the point of deciding to replace the company's DJI fleet with Skydio 2 as a way of improving operations across the board: “When I started Accurate Drone Solutions, the [DJI] Phantom was my drone of choice but it often made complex projects extremely complicated to map correctly…After having the opportunity to run Skydio’s on complex intricate scanning projects I have been able to see first hand that their autonomy is the way of the future. As a business, Accurate Drone Solutions is phasing out DJI and aiming to integrate Skydio’s on all of our projects to not only provide extremely precise data to our clients but to bring autonomous drones into the construction industry and prove to large companies just how valuable the data they generate can be.” — Sam Delong, CEO, LinkedIn post on 11.12.2020 Challenge: Manual DJI drones take longer and generate less accurate maps Mapping with manual drones was time-intensive and generated less accurate models than what is possible today. Because the DJI Phantom 4 Pro v2.0 is designed for manual flight, the Accurate Drone Solutions team faced a challenge whenever obstacles were present — for example on a construction site with a crane. In these cases, the team faced a tradeoff of safety vs. speed and quality. 
To avoid crashes, the team would either have to plan four grid flight missions around the crane, which increases data capture time by 4x, or perform the grid flight at a high enough altitude to clear the obstacles, which decreases the resolution of items at ground level. As a result, the team was burdened with an extensive data capture process that limited its total earning potential. Solution: Skydio 2 and DroneDeploy to capture data from any site, at any altitude Today, Accurate Drone Solutions uses the Skydio 2 paired with DroneDeploy. Thanks to the vehicle's native onboard autonomy, the team does not need to worry about obstacles when they set up automated DroneDeploy flight paths, so they can generate 3D model inputs from lower altitudes. Results: Better maps in ¼ of the time Accurate Drone Solutions' results are an indication of the power autonomy has to make independent drone operators more efficient and profitable. Sam's operation has reduced flight time by 66%, processing time by 25%, and secured a 10x ROI on switching from the DJI Phantom 4 Pro v2.0 to the Skydio 2 for DroneDeploy mapping. Overall, Accurate Drone Solutions has seen a 37% uptick in revenue per pilot. Meanwhile, data quality has also improved. Thanks to Skydio drones' native obstacle avoidance, Accurate Drone Solutions can perform photogrammetry flights at lower altitudes, where other drones cannot fly due to greater obstacle densities. As a result, the Skydio 2 is able to generate better quality data than its higher-priced DJI alternatives. Two DroneDeploy models captured by DJI (left) and Skydio (right). Delong points out the more vivid colors and greater resolution of the model generated by the Skydio 2. To learn how Accurate Drone Solutions is generating this incredible value for their own operation and their customers, please come check out the full case study, available for download on Skydio.com. 
And to hear from Sam directly, please check out his recent appearance on our webinar: Accelerating your Part 107 Business with Skydio.
https://medium.com/skydio/part-107-drone-operator-switches-from-dji-to-skydio-and-increases-per-pilot-revenue-37-9c3b3e211b64
['Guillaume Delepine']
2020-12-10 16:02:45.576000+00:00
['Photogrammetry', 'Construction Industry', 'Drones', 'Technology', 'Construction']
1,031
RT/ Stretchable ‘skin’ sensor gives robots human sensation
by Anirudha Majumdar, Alec Farid, Anoopkumar Sonar in The International Journal of Robotics Research Guaranteeing the safety and success of autonomous robots operating in novel environments is a pivotal challenge in modern robotics. As engineers increasingly turn to machine learning methods to develop adaptable robots, new work by Princeton University researchers makes progress on such guarantees for robots operating in environments with diverse types of obstacles and constraints. "Over the last decade or so, there's been a tremendous amount of excitement and progress around machine learning in the context of robotics, primarily because it allows you to handle rich sensory inputs," like those from a robot's camera, and map these complex inputs to actions, said Anirudha Majumdar, an assistant professor of mechanical and aerospace engineering at Princeton. However, robot control algorithms based on machine learning run the risk of overfitting to their training data, which can make algorithms less effective when they encounter inputs that differ from those they were trained on. Majumdar's Intelligent Robot Motion Lab addressed this challenge by expanding the suite of available tools for training robot control policies and by quantifying the likely success and safety of robots performing in novel environments. In three new papers, the researchers adapted machine learning frameworks from other arenas to the field of robot locomotion and manipulation. They turned to generalization theory, which is typically used in contexts that map a single input onto a single output, such as automated image tagging. 
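To get a feel for the kind of statistical statement generalization theory trades in, consider a much simpler device than the PAC-Bayes machinery these papers actually use: a one-sided Hoeffding bound, which converts a handful of empirical trials into a high-confidence lower bound on the true success rate. The sketch below is an illustration of that generic idea, not the authors' method.

```python
import math

def success_lower_bound(successes: int, trials: int, delta: float) -> float:
    """One-sided Hoeffding bound: with probability at least 1 - delta,
    the true success probability exceeds the returned value."""
    empirical = successes / trials
    slack = math.sqrt(math.log(1 / delta) / (2 * trials))
    return empirical - slack

# e.g. 18 successful obstacle-avoidance runs out of 20, at 95% confidence
bound = success_lower_bound(18, 20, delta=0.05)
print(f"{bound:.3f}")  # ≈ 0.626
```

Note how few trials certify only a weak guarantee (a 90% empirical rate yields roughly a 63% certified rate here), which is why formal guarantees computed over many training environments are valuable, and why even guarantees in the 80–95% range can fall short for safety-critical deployment.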
The new methods are among the first to apply generalization theory to the more complex task of making guarantees on robots’ performance in unfamiliar settings. While other approaches have provided such guarantees under more restrictive assumptions, the team’s methods offer more broadly applicable guarantees on performance in novel environments, said Majumdar. In the first paper, a proof of principle for applying the machine learning frameworks, the team tested their approach in simulations that included a wheeled vehicle driving through a space filled with obstacles, and a robotic arm grasping objects on a table. They also validated the technique by assessing the obstacle avoidance of a small drone called a Parrot Swing (a combination quadcopter and fixed-wing airplane) as it flew down a 60-foot-long corridor dotted with cardboard cylinders. The guaranteed success rate of the drone’s control policy was 88.4%, and it avoided obstacles in 18 of 20 trials (90%). When applying machine learning techniques from other areas to robotics, said Farid, “there are a lot of special assumptions you need to satisfy, and one of them is saying how similar the environments you’re expecting to see are to the environments your policy was trained on. In addition to showing that we can do this in the robotic setting, we also focused on trying to expand the types of environments that we could provide a guarantee for.” “The kinds of guarantees we’re able to give range from about 80% to 95% success rates on new environments, depending on the specific task, but if you’re deploying [an unmanned aerial vehicle] in a real environment, then 95% probably isn’t good enough,” said Majumdar. 
“I see that as one of the biggest challenges, and one that we are actively working on.” Still, the team’s approaches represent much-needed progress on generalization guarantees for robots operating in unseen environments, said Hongkai Dai, a senior research scientist at the Toyota Research Institute in Los Altos, California. “These guarantees are paramount to many safety-critical applications, such as self-driving cars and autonomous drones, where the training set cannot cover every possible scenario,” said Dai, who was not involved in the research. “The guarantee tells us how likely it is that a policy can still perform reasonably well on unseen cases, and hence establishes confidence on the policy, where the stake of failure is too high.” In two other papers, presented Nov. 18 at the virtual Conference on Robot Learning, the researchers examined additional refinements to bring robot control policies closer to the guarantees that would be needed for real-world deployment. One paper used imitation learning, in which a human “expert” provides training data by manually guiding a simulated robot to pick up various objects or move through different spaces with obstacles. This approach can improve the success of machine learning-based control policies. To provide the training data, lead author Allen Ren, a graduate student in mechanical and aerospace engineering, used a 3D computer mouse to control a simulated robotic arm tasked with grasping and lifting drinking mugs of various sizes, shapes and materials. Other imitation learning experiments involved the arm pushing a box across a table, and a simulation of a wheeled robot navigating around furniture in a home-like environment. The researchers deployed the policies learned from the mug-grasping and box-pushing tasks on a robotic arm in the laboratory, which was able to pick up 25 different mugs by grasping their rims between its two finger-like grippers — not holding the handle as a human would. 
In the box-pushing example, the policy achieved 93% success on easier tasks and 80% on harder tasks. “We have a camera on top of the table that sees the environment and takes a picture five times per second,” said Ren. “Our policy training simulation takes this image and outputs what kind of action the robot should take, and then we have a controller that moves the arm to the desired locations based on the output of the model.” A third paper demonstrated the development of vision-based planners that provide guarantees for flying or walking robots to carry out planned sequences of movements through diverse environments. Generating control policies for planned movements brought a new problem of scale — a need to optimize vision-based policies with thousands, rather than hundreds, of dimensions. “That required coming up with some new algorithmic tools for being able to tackle that dimensionality and still be able to give strong generalization guarantees,” said lead author Sushant Veer, a postdoctoral research associate in mechanical and aerospace engineering. A key aspect of Veer’s strategy was the use of motion primitives, in which a policy directs a robot to go straight or turn, for example, rather than specifying a torque or velocity for each movement. Narrowing the space of possible actions makes the planning process more computationally tractable, said Majumdar. Veer and Majumdar evaluated the vision-based planners on simulations of a drone navigating around obstacles and a four-legged robot traversing rough terrain with slopes as high as 35 degrees — “a very challenging problem that a lot of people in robotics are still trying to solve,” said Veer. In the study, the legged robot achieved an 80% success rate on unseen test environments. The researchers are working to further improve their policies’ guarantees, as well as assessing the policies’ performance on real robots in the laboratory. 
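The motion-primitive idea described above is easy to sketch. In the toy planner below (my own illustrative reconstruction, not the authors' code), the planner searches over a handful of named primitives on a grid, each a short fixed sequence of low-level steps, instead of choosing a raw command at every step. This shrinks the action space exactly as described, making the search tractable.

```python
from collections import deque

# Each primitive is a short, fixed sequence of (dx, dy) steps.
PRIMITIVES = {
    "forward": [(1, 0), (1, 0)],
    "veer_left": [(1, 0), (0, -1)],
    "veer_right": [(1, 0), (0, 1)],
}

def plan(start, goal, obstacles, max_depth=6):
    """Breadth-first search over sequences of primitives (not raw steps)."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        pos, seq = queue.popleft()
        if pos == goal:
            return seq
        if len(seq) == max_depth:
            continue
        for name, steps in PRIMITIVES.items():
            p, ok = pos, True
            for dx, dy in steps:
                p = (p[0] + dx, p[1] + dy)
                if p in obstacles:   # primitive is rejected if any step collides
                    ok = False
                    break
            if ok and p not in seen:
                seen.add(p)
                queue.append((p, seq + [name]))
    return None  # no primitive sequence reaches the goal

# Route around an obstacle at (2, 0) on the way to (4, 0).
print(plan((0, 0), (4, 0), obstacles={(2, 0)}))
```

With three primitives the planner branches over 3 choices per decision rather than a continuous torque or velocity space, which is the tractability gain the text describes; the real systems, of course, operate in far higher-dimensional state spaces.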
Videos A panel discussion on the future of robotics and societal challenges with Dr. Ruzena Bajcsy, roboticist and founder of the GRASP Lab. Collaborative robots offer increased interaction capabilities at relatively low cost but, in contrast to their industrial counterparts, they inevitably lack precision. Researchers address this problem by relying on a dual-arm system with laser-based sensing to measure relative poses between objects of interest and compensate for pose errors coming from robot proprioception. Developed by NAVER LABS with Korea University of Technology & Education (Koreatech), the robot arm now features an added waist, extending the available workspace, as well as a sensor head that can perceive objects. It has also been equipped with a robot hand, the “BLT Gripper”, that can switch between various grasping methods. The National Nuclear User Facility for Hot Robotics (NNUF-HR) is an EPSRC-funded facility to support UK academia and industry in delivering ground-breaking, impactful research in robotics and artificial intelligence for application in extreme and challenging nuclear environments. In case you were still wondering why SoftBank acquired Aldebaran and Boston Dynamics:
https://medium.com/paradigm-fund/rt-stretchable-skin-sensor-gives-robots-human-sensation-1da6c0c8ab64
[]
2020-11-23 10:22:52.099000+00:00
['Technology', 'Robotics Automation', 'Science', 'Robots', 'Robotics']
1,032
Verifying Documents With a Blockchain System
This article presents a proposal for document validation using Blockchain. Even with the world moving quickly, validation processes are often slow and manual. The use of Smart Contracts and Blockchain is ideal for this manual and unreliable scenario. This decentralized system stores the history of assets and transactions on a network with no central point of failure. More resistant to malicious attacks, Blockchain allows documents to be accepted securely and digitally. Blockchain Sketch — Retrieved from Upfolio Stimulus The world is changing faster as a result of accelerated technological development. Companies from all over the world invest heavily in innovation to create answers to the needs presented in the market. New technologies are part of this process, but it is important to emphasize that, more than opening different business opportunities, they provide a real revolution in society, enabling actions that were previously unimaginable and improving people's quality of life. However, tasks that require trust usually do not follow this rapid progress. We are commonly required to confirm that we agree with a document. Currently, this process is done through intermediaries performing manual work, or through digital systems that emulate tasks we would otherwise do in person. Even when efficient, none of these solutions is agile enough for today's world. In this scenario of mistrust, it is possible to identify the advantages of implementing a Blockchain-based system. A Blockchain is a decentralized database that stores a history of assets and transactions between computers on a network. It is replicated across all nodes, so it has no central point of failure. This makes it more resistant to malicious attacks, allowing documents to be accepted securely and digitally. Proposal The concept of asset validation through Blockchain is not new. 
Standard Chartered and DBS Bank have already announced a successful Blockchain proof of concept that prevents a document showing title to goods from being used for trade finance more than once. Our proposal is in line with these existing solutions, although it uses a public Blockchain environment to keep validation simple. The best way to implement a document verification process using Blockchain technology is to develop a parallel system based on Smart Contracts. For this project, the asset is the document and the transaction is the act of a user verifying that document. Smart Contracts are self-executing digital contracts that use technology to ensure that the signed agreements are carried out. The validation of the contract rules is done through a Blockchain, which ensures that the contract cannot be changed. How Smart Contracts Work — Retrieved from Intellias We think it is better to use a public blockchain network. In public Blockchain networks, access is completely open: anyone in the world can send and receive transactions. The possibility of being audited by anyone is advantageous in a system that aims to validate documents. Among such networks, we understand that Ethereum is the best choice, as it is the largest network that supports Smart Contracts. Before a transaction can be considered valid, it must be validated by the network's nodes through the chain's consensus process. This guarantees the greatest equality and transparency in the system. The system has a web UI so that users can insert, validate, and search documents in a simpler way. This UI integrates directly with the smart contracts stored on Ethereum. To access Ethereum distributed applications, or "Dapps", in your browser, you will need to install a plugin such as MetaMask. 
The extension injects the Ethereum web3 API into every website's javascript context so that Dapps can read from the blockchain. It also lets the user create and manage their own identities, so when a Dapp wants to perform a transaction, the user gets a secure interface to review the transaction before approving or rejecting it. Verifying Documents Through a Blockchain System Each document is converted to a contract where ownership and data are immutably stored. We use another smart contract to keep track of the multiple contracts, which maintains the history of the documents created. We could do this in a system outside the Blockchain (off-chain), but we believe that this could create a point of manipulation for the stored documents. Implementation In this section, we show how to implement a smart contract to verify documents on Ethereum. For that, we need to write code in Solidity. This is an object-oriented programming language for writing smart contracts, used on various Blockchain platforms, most notably Ethereum. Let's first define the private variables that will store the data of each of our contracts.

address private owner;
string private filename;
address[] private validators;
mapping (address => bool) private validated;

To represent document ownership, we use the address of the user who is creating the contract. In Ethereum and Solidity, an address corresponds to the last 20 bytes of the Keccak-256 hash of the public key. Since each public key is unique, we can use it to represent ownership. We also use an array of addresses to store the users who have validated the document. To make it easier to check whether a specific person has validated the document, we use a mapping. A mapping is generally recommended for this use case, where a contract could accumulate an unlimited number of entries and be updated over time; the main advantage of an array is iteration. 
But iteration needs to be limited, not only for speed but also because of the cost of each operation. We use a constructor so that each document is converted to a contract where ownership and data are immutably stored. A constructor is a special function used to initialize the state variables of a contract. In our case, it sets the contract owner and the file name, and emits an event. An event is an inheritable member of a contract that stores the arguments passed to it in the transaction logs. These logs are stored on Ethereum and are accessible using the address of the contract for as long as the contract is present on the Blockchain.

event OwnerSet(address indexed oldOwner, address indexed newOwner);

constructor(string memory _filename) public {
    owner = msg.sender;
    filename = _filename;
    emit OwnerSet(address(0), owner);
}

The main interaction we have with this contract is validating a document, so we have a function that receives the validator's address and records the validation. As can be seen below, we store it in both an array and a mapping to make lookups by validator cheap, and we guard against recording the same validator twice.

function validateFile(address newValidator) public {
    // Guard: prevents duplicate entries for the same address.
    require(!validated[newValidator], "Already validated");
    validators.push(newValidator);
    validated[newValidator] = true;
}

As our variables are private, it is not possible to interact with them directly, so we create external functions to return their values. This type of function cannot be accessed internally, only externally. For external functions, the compiler does not need to allow internal calls, so arguments can be read directly from call data, saving a copying step. Since memory allocation is expensive, this makes external functions cheaper. 
function hasValidated(address newValidator) external view returns (bool) {
    return validated[newValidator];
}

function getOwner() external view returns (address) {
    return owner;
}

function getFileName() external view returns (string memory) {
    return filename;
}

For the whole implementation, please access our GitHub repository. There you will also find how we implemented another smart contract called DataTrack, which is responsible for keeping track of the multiple ValidateData contracts. Conclusion Companies from all over the world invest heavily in innovation to create answers to the needs present in the market. However, tasks that require trust usually do not follow this rapid progress. In this scenario of mistrust, it is possible to identify the advantages of implementing a Blockchain-based system. Our proposal is in line with solutions already created, although it uses a public Blockchain environment to make validation simpler. We presented how to implement a smart contract to verify documents on Ethereum. For the whole implementation, please access our GitHub repository.
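The contract's bookkeeping (an array for iteration plus a mapping for constant-time membership checks) can also be mirrored off-chain. The sketch below is a plain Python model of the same logic, useful for unit-testing the design before paying gas; it is an illustration, not part of the published contract, and the duplicate guard mirrors the intent of the on-chain validation.

```python
class ValidateData:
    """Off-chain model of the document contract: an ordered list of
    validators for iteration, plus a set for O(1) membership checks."""

    def __init__(self, owner: str, filename: str):
        self.owner = owner        # mirrors `address private owner`
        self.filename = filename  # mirrors `string private filename`
        self.validators = []      # mirrors `address[] private validators`
        self.validated = set()    # mirrors the Solidity mapping

    def validate_file(self, validator: str) -> None:
        if validator in self.validated:  # duplicate guard
            return
        self.validators.append(validator)
        self.validated.add(validator)

    def has_validated(self, validator: str) -> bool:
        return validator in self.validated

doc = ValidateData(owner="0xOwner", filename="contract.pdf")
doc.validate_file("0xAlice")
doc.validate_file("0xAlice")  # ignored: already recorded
print(doc.has_validated("0xAlice"), len(doc.validators))  # True 1
```

Keeping both structures is the same tradeoff discussed above: the list preserves order and supports iteration over validators, while the set (like the Solidity mapping) makes the membership query cheap regardless of how many validations accumulate.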
https://medium.com/swlh/verifying-documents-through-a-blockchain-system-3d2eb867f88b
['Matheus Leal']
2020-11-05 20:54:15.892000+00:00
['Solidity', 'Technology', 'Ethereum', 'Blockchain', 'Smart Contracts']
1,033
Natural Farming or Who Cares About Rats?
A few years ago I had a unique chance to learn about “natural farming” with Buddhist monks. It was one of the first meaningful connections with Japan I had experienced in Taiwan. Early on a Saturday morning, a group of enthusiasts, including me, led by a nun of Dharma Drum Mountain, went to a rural district north of Taipei to visit a “natural farm”. The method of natural farming was established by the Japanese philosopher Masanobu Fukuoka. The method suggests bringing agriculture closer to wild nature conditions, with a focus on a sustainable growing process. And here is the difference from organic agriculture: organic farming supposes purity of products without focusing on the way of farming itself or the whole ecosystem, so purity can be achieved in very unnatural ways (like growing plants without soil or using special fertilizers). Natural farming is all about cooperating with nature and copying the natural ecosystem. This method is also called “do-nothing farming” because it is based on minimal human intervention in the plant-growing process. Consumers unwittingly encourage farmers to use chemicals and fertilizers by choosing the most beautiful fruits and vegetables. However, plants grown in natural conditions vary widely in size and appearance and seldom have an ideal shape or color. Farming area on the north of Taipei Natural farming excludes the use of fertilizers. For example, on the natural farm in Beitou, mountain spring water rich in minerals is used instead. The water is also sprayed above the plants, imitating raindrops, for aeration of the water itself. Weeds and other dead plants are used as natural fertilizers so that nutrition stays within the ecosystem. Lines of beds are covered with weeds instead of plastic There is no digging before seeding. Weeds are gently pulled, and their roots remain in the soil for further aeration and fertilizing, without damaging the soil structure itself. 
Outside of Taipei you can often see pineapple farms where farmers use plastic covers between the pineapple rows. This method is widely used to fight weeds. And the plastic film often remains in the ground for tens or hundreds of years, polluting the soil. In natural farming, monoculture planting is avoided. Different kinds of vegetables and fruits are grown together, as in the wild, which often creates conditions for beneficial cohabitation. As proof of the strength achieved, the nun showed us a banana tree which survived a typhoon undamaged while the banana trees of neighboring (not “natural”) farms were all seriously damaged. If no chemicals are used, what about pests? Surprisingly, natural farmers simply do not worry about them, because nature already has its own response. Everything in nature is in balance, and wild plants survive successfully. Yes, some of the plants can be damaged by a huge number of insects, but it is still beneficial for the whole garden in the long term because the plants that survive become stronger. For the harmful insects there are natural enemies, as in the case of aphids and ladybirds. The monks illustrated some of the concepts of natural farming with an example I will call “The Rat and the Corn”. There is a rat that often visits a corn bed. A natural farmer welcomes the rat with a peeled corn ear instead of poison, so that the next time the rat won’t anxiously bite into new corn ears but will calmly continue to eat the prepared one. When farmers try to kill rats with poison or rat traps, the animals feel danger, so in their anxiety they bite many vegetables on the way. Moral of the story: farmers can try to peacefully share a little part of their harvest with other creatures instead of fighting for every seed. This farming method contradicts the concept of the most popular and efficient form of intensive agriculture, where farmers are like soldiers in a battle for the harvest. 
However, looking from the perspective of a desirable future of an ethical and sustainable world, doesn’t the Earth originally belong to all creatures, and not only to selfish humans? Why don’t we try to accept the interests of animals and plants in the farming process? Even if this way is much less efficient than intensive agriculture, what if there are customers who are willing to pay extra for these sustainable farming products?
https://medium.com/@storozheva-mari/natural-farming-of-masanobu-fukuoka-a3a581f63673
['Mari Storozheva']
2020-12-28 14:42:29.468000+00:00
['Agriculture Technology', 'Sustainability', 'Buddhism']
1,034
GET 2020|What will happen to education when new farmers go back to their lands?
“The past seven years have been the golden age of the education industry in China. Although we saw rapid growth, there must be black swans and gray rhinos.” Speaking at the Global Education Technology (GET) Summit held in Beijing on Nov. 23–24, Mei Chujiu, the founder and CEO of the summit organizer JMDedu, stressed that regardless of the pandemic, learning demands will never change: “Education is an industry that is immortal; we can make more efforts for this career.” Mei Chujiu, founder & CEO of JMDedu Mei Chujiu mentioned that since COVID-19 spread across the country, many educational institutions have gone bankrupt or reorganized because of the suspension of offline learning gatherings. Some of them even had no solutions for parents and students. “The core reason they were prone to the risks of COVID-19 is that they carried inherent risks; COVID-19 was just the last straw that broke the camel’s back,” said Mei. Spurred by the OMO campaign during the epidemic, the number of Chinese online education users reached 420 million. The number slightly decreased to 380 million as the epidemic came under control and schools reopened, accounting for 40.5% of the 900 million Chinese netizens. Wan Yiting, CEO of TAL Group From the perspective of Wan Yiting, CEO of TAL Group, the online education space has been a blue ocean turning red. As written in China’s Fourteenth Five-Year Plan, the government has shown great determination and support to develop online education. Plus, the Ministry of Human Resources and Social Security has also released a document suggesting a new profession for the online education sector called “online learning partner” (在线学习服务师). It is defined as a person who uses digital learning platforms or tools to help identify students’ understanding, inform and improve teachers’ instructional practice, and help students track their learning. “As of now, we’ve recruited more than ten thousand talents in this new profession,” said Wan Yiting. 
Besides, the integration of technology and education has boosted many new trends. Wan Yiting believed that the education industry is undergoing a digital revolution. “Being data-driven will become one of the core competencies in this space, which is an effective way to realize large-scale teaching tailored to each student’s aptitude.” Wan Yiting highlighted that artificial intelligence and 5G technologies will comprehensively enhance user experience and make online classes more like realistic classroom scenarios. Seeing the continuous development of new infrastructure, Mei Chujiu put forward that “agriculture + education” is an emerging market in China with a capitalization above $100 billion. “As manual labor has been replaced by automation, migrant workers have to think about where to go. In my opinion, the best way is to go back to their land and farm for family income; they are the new farmers. Today’s China is an infrastructure powerhouse. On the one hand, everyone everywhere can access new information, and various e-commerce firms have emerged. On the other hand, as the processes of storage, logistics, and the cold chain have been continuously improved, shipping foodstuffs long distances is no longer a problem.” When agriculture and education intersect, Mei considered that the opportunities will be embodied in two business forms: 1. Skill-based education and training focusing on new farmers 2. Escalating demand for children’s education and training from new-farmer families “When new farmers return to their hometowns, their urban life experience will affect their demand for improving the aesthetics and quality of children’s education, which means a large market space outside the metro cities,” said Mei. “We must have confidence in ourselves.” Mei Chujiu deemed that China’s achievements in education informatization, its public school system, and its after-school system offer considerable reference value for most developing and developed countries. 
“Thus, Chinese education companies should go abroad while resuming normal business operations.”
https://medium.com/@edtechchina/get-2020-what-will-happen-to-education-when-new-farmers-go-back-to-their-lands-4e76a4b00ee9
['Getchina Insights']
2020-12-21 08:04:32.242000+00:00
['Farmers', 'Education Technology', 'Business Opportunities', 'Agriculture', 'Education']
1,035
2020: The Year of FinTech Bank Approvals
US Regulator Approvals (Image Credit — PaymentsJournal) With the focus of 2020 being on COVID-19 and, most recently, the US presidential election, it’s difficult to see through the headlines to the critical developments in the regulation of FinTech companies. There have been numerous milestones in application approvals, led by Varo Bank receiving its bank charter approval in July. Throughout the year, other companies (from payments, cryptocurrency, and broad financial services) have made gains in regulatory approvals towards licenses that allow for operations independent of bank partners. These established fintechs would be able to create new business models, become more cost-effective, and launch new services at a faster rate. Here are 3 FinTech firms that have made headlines this year with regulatory approvals. SOFI SoFi (Image Credit — PRNews) SoFi (Social Finance), initially known for taking on student loans after the Financial Crisis and later expanding to other financial services, is the most recent fintech to make the list for 2020. Per Reuters, the company received conditional approval from the U.S. Office of the Comptroller of the Currency (OCC) towards a national bank charter at the end of October. As part of the full approval, the FDIC (Federal Deposit Insurance Corporation) and the Federal Reserve both need to perform an upcoming review of SoFi. This FinTech giant does currently provide banking solutions of deposits and loans, but does so through its bank partner. In banking partnerships, banks have the final word when it comes to compliance and program approval, as any misstep could result in regulatory complaints and penalties. The review and decision process of banks can take months; fintechs with bank charters can minimize this lag time. “SoFi is on a mission to help consumers get their money right all in one app. 
This preliminary, conditional approval from the OCC is a testament to the mission-driven company we have built, the employees who help it grow, and the over 1.5 million members we currently serve.” Anthony Noto, CEO of SoFi (Reuters) SoFi had previously applied for a banking license in 2017 under its past leadership, but the process was cancelled after the former CEO departed. Updates and a decision on its application should take place in early 2021, with full approval expected by the end of Q2. SQUARE Square logo (Image Credit — Deliverect) Square, the well-known mobile payments processing company, gained conditional FDIC approval this past March (per Business Insider). This industry leader would be able to create a bank based in Utah once complete approval from all regulators goes through. Square will need to keep higher capital requirements than other banks, but the move adds a clear path forward to new revenue streams. The regulatory journey for Square was different from SoFi’s or Varo’s — its first step had been filing an ILC (industrial loan company) bank charter application three years ago (in Sept. 2017). This was met with criticism and scrutiny from groups such as the ICBA (Independent Community Bankers of America), who believe newly approved companies would be given leeway to avoid regulatory oversight. Square modified its approach — direct communication with regulators bridged the gap in revising its application for a better outcome. With the revisions submitted at the end of 2018 and this conditional approval, launching Square Financial Services in early 2021 seems realistic. The fintech is likely to focus on direct lending to existing merchants (based on monthly processing income) and then develop deposit relationships soon after. Square Capital already exists as a growing revenue segment (about 100K loans issued for over $650M in Q4 2019). Bank approval would provide better unit economics for the company.
The improvement in margins can enable more working capital and business lending to boost a customer’s annual growth. Similar to SoFi, bank approval provides the firm an independent way to offer all banking services. As other payment companies expand their offerings beyond lending into banking, Square needs to be strongly positioned in delivering full financial services solutions. The vision is long-term success by deepening wallet share from existing clients and acquiring new customers from other platforms with fewer services. For merchants that trust in Square during this pandemic, buying additional banking products can be an automatic ‘yes’. KRAKEN Kraken (Image Credit — FinSMEs) Kraken, a US-based crypto exchange launched in 2011, received its own approval from regulators in September of this year (per Forbes). Instead of a bank charter, the fintech received an SPDI (special purpose depository institution) charter from Wyoming. This is an alternative regulatory track for non-money transmitters that have complex payment flows but don’t need state-by-state licenses. The SPDI charter is the initial part of a broader process toward state supervision of a compliance path for cryptocurrency companies. Kraken would then be able to directly service large enterprises and its existing retail customers. In the US, the state of Wyoming has set itself apart with expertise in cryptocurrency, which has helped regulators design frameworks that allow for innovation but also prevent financial crime. The state’s industry guidelines, in addition to Kraken’s extensive compliance programs, combine to enable tracking of transaction activity and an open network. The potential to extend services internationally also exists, as this process enables cross-border transfers in multiple currencies globally, which is critical for exchanges that are industry leaders. This is the most historic milestone to date for the cryptocurrency sector of FinTech and regulation in the US.
As crypto becomes a stable component within financial services, providing proper regulatory governance can propel mainstream adoption by consumers and businesses of all sizes. Regulation Continues to Open Up in the US Our post on “Regulators Open Up to FinTech” discusses the new vision from the OCC. This new decade is ushering in a future in which US regulators embrace innovation in financial services. Instead of keeping emerging FinTech companies and their products outside of regulatory vigilance, granting license and bank charter approvals gives regulators the ability to monitor them. The new director of the OCC believes this open vision is ultimately what’s best for oversight and consumer protection in the industry, not only in banking but also in allowing cryptocurrency to be held in custody by banks. Other jurisdictions globally have approved open banking frameworks and regulatory sandboxes in the last 5 years — this is a chance for the US to catch up and attract new companies and market opportunities. // Join the FinTech community @FinTechtris for industry content & discussions (including trends, deep dives, and sector analysis) — signup for our newsletter today!
https://medium.com/fintechtris/2020-the-year-of-fintech-bank-approvals-a7d8426b1ea6
['William U. Morales']
2020-12-14 16:38:28.680000+00:00
['Fintech', 'Regulation', 'Finance', 'Technology', 'Banking']
1,036
Five Best Mobile Accessories Brands in Pakistan
In Pakistan, numerous tech companies deal in mobile accessories. The mobile accessories business is widespread, reaching nearly every corner of Pakistan, and getting hold of mobile accessories is no longer a hard task. Demand for mobile accessories is growing day by day, with small and large accessory setups and companies in every market, as these accessories have now become a necessity. Many mobile accessory brands operate in Pakistan. They have different origins: some are reliable and show great quality, while others are less dependable. So, we will list the 5 best mobile accessory brands in Pakistan, ranked after specifically examining and experiencing them. Baseus Baseus is a consumer electronics brand under Shenzhen Times Innovation Technology Co. Ltd. It offers a wide range of products, including mobile accessories, and has become a global brand. There are a number of Baseus products in the market now. Baseus is committed to providing each customer with the highest standard of customer service. Baseus also has a diverse range of mobile accessories, including different types of chargers, audio accessories, cables, holders, gaming accessories, and much more. The real specialty of Baseus, though, lies in its phone cases. They have a whole range of phone covers, from plastic covers to original leather covers with shockproof resistance. Anker Anker is the world’s leading brand in charging devices. Anker utilizes Power Delivery technology to charge phones, tablets, and laptops at very fast speeds. It has a lot of charging devices to offer, like power strips, car chargers, desktop chargers, wireless chargers, wall chargers, and high-speed universal chargers. They have a wide range of portable power banks as well.
These include pocket-size portable chargers, ultra-compact portable chargers, universal high-speed portable chargers, and power stations. River Song River Song is one of the leading mobile phone accessory brands. It is associated with the IMG technology group, and its headquarters are in Shenzhen, China. It has a healthy infrastructure and active supply chain management. The company’s main aim is constant innovation in technology, which is why it has risen so quickly on the world stage. In almost four years, it has proven to be one of the best brands. Now, its products and services are available in thirty countries. River Song provides a range of accessories in Pakistan, all of excellent quality, including earphones, different types of chargers, smart watches, data cables, and much more. The reason for its success is that it provides valuable services to its users; the company’s main aim is premium quality and an excellent user experience. Xiaomi The company Xiaomi has Chinese origins, but there is a Xiaomi store in Pakistan, an eCommerce store that deals in Mi gadgets and mobile accessories. It is now a well-renowned brand in Pakistan’s market. Xiaomi Store Pakistan was established in 2016 by a group of experts. Its primary aim is to provide high-quality items and accessories, and its online shopping service is one of the most reliable. Xiaomi offers a wide range of mobile accessories in Pakistan. It deals in phone cases, LED glass protectors, power banks, mobile batteries, wireless charger pads, headphones, SD card jackets, and a lot more. Audionic The Sound Master Audionic is one of the most brilliant business divisions of Dany Technologies. It provides premium-quality products, especially for music lovers, and has been a well-known brand in Pakistan’s market for over 10 years now. It has given tough competition to multinational giants like Sony and others.
It has a wide array of audio devices with sensible quality and affordable prices, unlike Sony. Not just that, Audionic provides some other accessories too, like dual-port chargers, car chargers, data cables, auxiliary cables, and mics. Qadardan Online shopping
https://medium.com/@qadardan.ltd/five-best-mobile-accessories-brands-in-pakistan-aa7ce8733efa
['Qadardan Ltd']
2019-12-21 21:12:46.151000+00:00
['Baseus', 'Xiaomi', 'Slanzer Technology', 'River Song', 'Mobile Accessories']
1,037
The Hair-Raising Feeling of Sending a Crypto Payment
Photo by Gabriel Matula on Unsplash For companies in the blockchain/crypto industry, it’s a constant challenge to pay team members in cryptocurrency. Every month, developers, marketers, and other freelancers submit timesheets along with their wallet addresses and the currency they wish to receive (we’ve found Ethereum to be the most common). This is especially true for companies that have ICO’ed and have a global workforce where it’s impractical to pay in a single fiat currency. The main problem they face is keeping track of wallet addresses and conversion rates, which leads to fear about sending potentially large payments to the wrong place. Today, their best option is to maintain a spreadsheet with all of the details, but of course, manual input is prone to error and takes up too much time. With the crypto industry growing larger each day, and with a global workforce becoming more and more connected, the problem will only get worse over time. If only there was an easier way to manage invoices and payments to be paid in cryptocurrency, companies would have a much better experience with paying in crypto, ultimately leading to greater adoption of cryptocurrencies like Bitcoin and Ethereum in the workplace. With thousands of companies now entering the blockchain and cryptocurrency space, there is a clear opportunity to meaningfully impact a huge number of people and usher in a new era of cryptocurrency adoption. Introducing Gilded: Crypto Invoicing for Freelancers
https://medium.com/gilded/the-hair-raising-feeling-of-sending-a-crypto-payment-in-the-workplace-plus-gildeds-one-weird-30d6cad55391
['Gil Hildebrand']
2018-05-10 17:11:02.143000+00:00
['Freelancing', 'Technology', 'Business', 'Bitcoin', 'Cryptocurrency']
1,038
Never Stop Learning. In Blockchain. In Life with Ameer Rosic
Ameer Rosic on the Speaking of Crypto podcast Ameer Rosic knows technology, and what’s even more apparent throughout this conversation is that he really knows learning. He knows how to learn. He knows how to retain knowledge and he knows how to use it to keep growing and learning and challenging himself to do better. Ameer is the Co-founder of Blockgeeks, an online learning hub for developers and non-developers wanting to acquire and update their skills and knowledge around crypto and blockchain technology. He also hosts a popular YouTube channel with around 170 thousand subscribers that features videos like one that starts like this: “Honestly, straight up, I think motivation is the biggest scam ever!” If that doesn’t get you to want to watch more, I don’t know what will. He talks to me about why staying up to date with emerging technology, like blockchain, is so important. “If you want to stay relevant. And if you want to stay ahead of the curve. And if you want to both benefit you as an individual, whether you’re an entrepreneur or just a citizen, you want to be absorbing and you want to be educating yourself on the latest trending technologies. And blockchain is one of these latest trending technologies, it will influence every aspect of society as we know it. Pandora’s box has been opened.” Ameer explains that we’re still in the early days of where blockchain technology is headed. He says it’s early because what’s being built is way more complicated than the original internet. “You’re dealing with all different realms of society here, you’re dealing with money, you’re dealing with economics, you’re dealing with psychology, you’re dealing with governance, you’re dealing with privacy, you’re dealing with governments. You’re dealing with everybody. You’re literally trying to recreate a physical manifestation of a society online.” One of the most important points he makes is about what most people don’t really understand or appreciate yet about Bitcoin.
Ameer puts it really simply. “I still think the most underrated thing that people really don’t appreciate is the fact it’s devoid of the Fed. The Federal Reserve doesn’t control it, you can’t print it on demand, there’s a finite number, which is 21 million Bitcoins, and that’s it. End of story.” What does it mean to have a limited supply? It means that as the demand increases, the value increases. That’s not the case with the USD, for example, which is no longer backed by gold. When demand goes up, the amount of money goes up. So what does that mean for its value? It goes down. “What Bitcoin represents… It’s not simply a software… it’s a complete new paradigm.” Ameer Rosic on YouTube https://www.youtube.com/user/AmeerRosic Blockgeeks https://blockgeeks.com Best moments: Digital cash and trusting the math 16:42 Most underrated thing about Bitcoin 21:34 The greatest invention homo sapiens created 11:40 Building the New Internet 41:31 Becoming our own Swiss bank 45:50 Note: To anyone who is language sensitive, please note that explicit language is used in this episode.
https://medium.com/speaking-of-crypto/never-stop-learning-in-blockchain-in-life-with-ameer-rosic-2c1b7cb6ef86
['Shannon Grinnell']
2019-02-13 16:18:49.378000+00:00
['Blockchain', 'Blockchain Technology', 'Ethereum', 'Developer', 'Bitcoin']
1,039
Grokstyle wins LDV Vision Summit 2016 Entrepreneurial Computer Vision Challenge
(Originally posted on May 30, 2016 on the LDV Capital blog here. Update April 4, 2017: GrokStyle announced raising $2M. Our annual LDV Vision Summit is May 24–25, 2017 in NYC, early bird tickets on sale now!) Entrepreneurial Computer Vision Challenge Winner: Grokstyle, Sean Bell, CEO & Co-Founder ©Robert Wright/LDV Vision Summit Our annual LDV Vision Summit has two competitions. Finalists receive a chance to present their wisdom in front of hundreds of top industry executives, venture capitalists, and companies recruiting. The winning competitor also receives $5,000 in Amazon AWS credits. 1. A startup competition for promising visual technology companies with less than $1.5M in funding. 2. The Entrepreneurial Computer Vision Challenge (ECVC) for any Computer Vision and Machine Learning students, professors, experts or enthusiasts working on a unique solution to empower businesses and humanity. Competitions are open to anyone working in our visual technology sector such as: empowering photography, videography, medical imaging, analytics, robotics, satellite imaging, computer vision, machine learning, artificial intelligence, augmented reality, virtual reality, autonomous cars, media and entertainment, gesture recognition, search, advertising, cameras, e-commerce, visual sensors, sentiment analysis, and much more. The Entrepreneurial Computer Vision Challenge provides contestants the opportunity to showcase the technology piece of a potential startup company without requiring a full business plan. It provides a unique opportunity for students, engineers, researchers, professors and/or hackers to test the waters of entrepreneurism in front of a panel of judges including top industry venture capitalists, entrepreneurs, journalists, media executives and companies recruiting.
In the 2014 and 2015 Summits the ECVC was organized into predefined challenge areas (e.g., “estimate the price of a home or property,” “estimate how often a photo will be re-shared”) plus a “wildcard” category. Initially we proceeded in the same way for the 2016 ECVC, but we found that the most exciting entries were overwhelmingly the wildcards, so we decided to go all-in on that category. Attendees at this year’s summit bore witness to the outstanding lineup of finalists, including GrokStyle (visual understanding for interior design) from Cornell, MD.ai (intelligent radiology diagnostics) from Weill Cornell, DeepALE (semantic image segmentation) from Oxford University and Vision+Love (automated kinship prediction) from Carnegie Mellon. Congratulations to our 2016 LDV Vision Summit Entrepreneurial Computer Vision Challenge Winner: Grokstyle, Sean Bell, CEO & Co-Founder ©Robert Wright/LDV Vision Summit What is GrokStyle? GrokStyle, co-founded by Cornell researchers Sean Bell and Kavita Bala, is developing state-of-the-art visual search. Given any photo, we want to tell you what products are in it, and where you can buy them. We want to help customers and retailers connect with designers, by searching for how others have used and combined furniture and decor products. The world is full of beautiful design — we want to help you find it. As a PhD Candidate — what were your goals for attending our LDV Vision Summit? Did you attain them? My goals were to understand the startup space for computer vision, to connect with potential collaborators, find companies interested in building on our technology, and generally get our name out there so we can have a running start. The event definitely far exceeded our expectations and we attained all of our goals. Why did you apply to our LDV Vision Summit ECVC competition? Did it meet or beat your expectations and why? Serge Belongie recommended that we apply; he saw the value that the summit would have for us.
We were excited, but certainly did not expect the amount of positive feedback, support, and connections that we made. My pocket is overflowing with business cards, and I’m excited to continue these conversations as we turn our technology into a company. Why should other computer vision, machine learning, and artificial intelligence researchers attend next year? I think that all CV/ML/AI researchers should attend events like the LDV Vision Summit. The talks here are interesting and varied, and it is inspiring to see how algorithms and computer vision research are having a real impact in the world. You don’t get that at academic conferences like CVPR. We try to have an exciting cross-section of judges spanning computer vision experts, entrepreneurs, investors and journalists. Asking a question is Barin Nahvi Rovzar, Hearst, Exec. Dir., R&D & Strategy. Judges included: Serge Belongie (Prof., Cornell Tech, Computer Vision), Howard Morgan (First Round, Partner & Co-Founder), Gaile Gordon (Enlighted, Sr. Director, Technology), Jan Erik Solem (Mapillary, CEO), Larry Zitnick (Facebook, AI Research, Research Lead), Ramesh Jain (U. California, Irvine, Prof., Co-Founder Krumbs), Evan Nisselson (LDV Capital, Partner), Nikhil Rasiwasia (Principal Scientist, Snapdeal), Beth Ferreira (WME Venture Partners, Managing Partner), Stacey Svetlichnaya (Flickr, Software Engineer, Vision & Machine Learning), Adriana Kovashka (U. of Pittsburgh, Assist. Professor Dept. Computer Science) ©Robert Wright/LDV Vision Summit What was the most valuable part of your LDV Vision Summit experience aside from winning the competition? The most valuable part of the summit was connecting with three different companies potentially interested in building on our technology, and with four different potential investors/advisors.
Last year, a key potential collaborator had presented at LDV Vision Summit, looking for computer vision researchers to solve challenging problems in visual search, interior design, and recognition. This year we were able to connect and say “we solved it!” Sean Bell, CEO & Co-Founder of Grokstyle ©Robert Wright/LDV Vision Summit Do you have any advice for other researchers & PhD candidates that are thinking about evolving their research into a startup business? My advice would be to keep potential commercial applications in mind, early on in the project, so that what you end up with at the end is easier to take out of the lab and sell to the world. For me, one of the most challenging aspects of research is deciding which problems are solvable and which are worth solving — if you are interested in startups, this is even more important. There is the extra step of understanding who cares and who wants to use it. What was the timeline for you to take your idea for your research to evolving it into a startup plan? We presented a research paper at SIGGRAPH 2015 about our ideas from last year. It has taken us a year to flesh out the work, develop it from a research prototype to a product prototype. But there is still a lot to do. I am graduating in a few months, and Prof. Kavita Bala is joining full time on sabbatical. We plan to hit the ground running this summer with our engineer Kathleen Tuite, and two interns we are taking on. As technologists, we are looking to partner with business people to take the lead on evaluating which markets and customers can benefit the most from our technology. Starting in the fall, we plan on fundraising to help scale up our technical infrastructure.
https://medium.com/ldv-capital/grokstyle-wins-ldv-vision-summit-2016-entrepreneurial-computer-vision-challenge-80a06275f6b5
['Ldv Capital']
2017-04-04 22:11:38.945000+00:00
['Visual Technology', 'AI', 'Artificial Intelligence', 'Computer Vision', 'Machine Learning']
1,040
What’s Happening in South Africa
During the Plug and Play Summer Summit in Silicon Valley, we had a special guest from South Africa, Rhenier de Beer, Head of Innovation at Motus Group. Motus is one of the biggest automotive groups in South Africa with businesses in Import and Distribution, Retail and Rental, Motor-Related Financial Services, and Aftermarket Parts. After the Summit, Karen Airola and I from Plug and Play sat down with Rhenier to discuss what’s happening on the other side of the world and why South African corporations are flying over 15 hours to Silicon Valley. Here are some highlights and thoughts from our conversation. 1. Digitalization is key The country can easily be updated by using all things digital, ranging from e-payment and e-commerce to digital claim processing in insurance. The successful IPO of Jumia, an e-commerce startup founded in 2012, validates how digitalization can go a long way in South Africa. More and more South African corporates are coming to Silicon Valley to find digitalization models or solutions. 2. Transportation is a constant headache. Less than 5% of the people in South Africa own a car. Uber is an expensive option. Most lower working class people go with the Minibus, a vehicle you literally need to jump on at a chaotic crossroad and jump out of in two seconds when it stops at your destination. This is what a Minibus looks like in South Africa Minibuses like this don’t follow traffic rules. This does not mean simply driving through red lights; it means going the opposite way on a freeway! With wild violations of traffic rules, car crashes are taking place at a rate that is hard to imagine. That leaves lots of room for financial services companies to create more technology enablers to record and prevent car crashes. For example, Rhenier loves Owl Camera, a Silicon Valley startup that provides real-time video protection for people, cars & trucks.
Any technology solution like Owl Camera that can provide value-added products for the local South African market will have strong potential to thrive. 3. Very little local innovation Big corporations are doing innovation internally with R&D. Economic conditions are uncertain after the country’s political election, and people are afraid to invest money to build something new and unknown. A limited mentorship network is another hurdle for local startup innovation, because people have very little idea how to build a game-changing tech company.
https://medium.com/@siqi_73141/whats-happening-in-south-africa-e34468526365
['Siqi Lin']
2019-06-17 20:47:45.489000+00:00
['Innovation', 'Silicon Valley', 'Startups', 'Technology', 'South Africa']
1,041
OnePlus 9 Pro Tipped To Come With Official IP68 Rating; OnePlus 9
The OnePlus 9 series has been all the rage for a few days. As we draw nearer to the launch, numerous leaks and rumors are coming out. Now, insider Max Jambor has shared (via Voice) that only the OnePlus 9 Pro will have an official IP rating for water and dust resistance, while the other two models in the series will skip it. It is quite plausible that only the Pro model will come with an official IP rating, as the OnePlus 8 Pro was the only phone in the 8 series to have one. However, the company has not revealed anything yet in this regard. Apart from this, the insider has not mentioned anything else about the series. Meanwhile, other reports have alluded to the expected offerings from the lineup. Visit this website for complete details 👇 https://techmastertech1.blogspot.com/2020/12/OnePlus%209%20Pro%20Tipped%20To%20Come%20With%20Official%20IP68%20Rating%20OnePlus%209.html
https://medium.com/@ashirwadshri/oneplus-9-pro-tipped-to-come-with-official-ip68-rating-oneplus-9-17962a5d03f9
['Ashirwad Shrivastava']
2020-12-09 22:32:09.128000+00:00
['Opportunity', 'Mobile', 'Mobile Apps', 'Technology', 'Technical Analysis']
1,042
Where We Are Investing Now: India
Photo by Hardik Joshi on Unsplash Our funds are always on the lookout for the best entrepreneurs leading companies leveraging science to offer innovative worldwide solutions within Health and Happiness. In our constant quest to that end, about a year ago our attention was drawn toward the Indian startup ecosystem — India has witnessed the launch of more than 55,000 startups to date, with more than 3,200 startups raising $63 billion in funding in the last five and a half years alone (Inc 42). Home to 34 current unicorns and 52 companies with the potential to become unicorns by 2022, as the world’s second-largest startup ecosystem, India is poised for disruption. Even in the face of global economic uncertainty, 2019 was the second-most active year globally for venture capital investments in dollar value worldwide. It was a milestone year for the Indian VC industry, too, with $10 billion in capital deployed, the highest ever and about 55% higher than 2018. Additionally, India witnessed a 30% increase in deal volume over 2018 as well as larger average deal sizes across all stages (Bain and Company). Before the pandemic hit, members of our team were on the ground in India and met many remarkable stakeholders in the ecosystem, including accelerators, co-investors, and entrepreneurs. The drive, humility, ambition, and belief exemplified by everyone we encountered was astounding. So astounding that in the short four days we were there (cut short by the lockdown) we made the connections that led to two investments. The first company we invested in is Oga Fit. Founded in 2017 by 26-year-old CEO Ashish Rawat, Oga Fit is a one-of-a-kind responsive platform that offers fully interactive live and on-demand workouts. Oga’s proprietary motion comparison technology tracks and compares 17 joints of the human body to generate real-time feedback on user movements (yoga asanas, dance steps, fitness exercises, etc). You can even sync up to work out with your friends in real-time!
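The motion-comparison idea described above (tracking joint positions and flagging where a user's pose deviates from a reference) can be sketched in a few lines. This is a minimal, hypothetical illustration: the joint names, 2D coordinates, and tolerance are assumptions made for the sketch, not Oga Fit's actual 17-joint algorithm or API.

```python
import math

def pose_feedback(user, reference, tolerance=0.1):
    """Return per-joint deviations that exceed `tolerance`.

    `user` and `reference` map joint names to (x, y) coordinates
    in a shared, normalized coordinate frame.
    """
    feedback = {}
    for joint, (rx, ry) in reference.items():
        ux, uy = user[joint]
        # Euclidean distance between tracked joint and reference joint
        dist = math.hypot(ux - rx, uy - ry)
        if dist > tolerance:
            feedback[joint] = round(dist, 3)
    return feedback

# Illustrative poses: the elbow is close to the reference, the knee is not.
reference = {"left_elbow": (0.30, 0.50), "right_knee": (0.70, 0.20)}
user = {"left_elbow": (0.32, 0.51), "right_knee": (0.55, 0.25)}
print(pose_feedback(user, reference))  # only right_knee is flagged
```

A real system would run this per video frame on pose-estimation output and turn the deviations into corrective cues, but the core comparison step looks roughly like this.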
The brand will use the funding for content creation, product development, and marketing. Have a read about their recent raise or listen to Managing Partner Mike Edelhart’s interview with Ashish. The second company is Phable. Phable is a chronic disease management platform providing physicians in India real-time feedback on their patients’ symptoms and behaviors in more than nine conditions, such as Hypertension and Diabetes Mellitus, using 21 integrated at-home devices. This is just the beginning for them — in only five months, CEO Sumit Sinha has raised the company’s Series A at 4.5x the valuation, demonstrating the huge potential for growth in an underserved market. Learn more about Phable’s rapid rise to success in Mike’s interview with Sumit. Our funds focus on the science and technology of health and happiness, and continue to look ever deeper into constantly evolving related areas ranging from Neuroscience, Mental Health, Fitness, Microbiome, Genetics, Digital Health, and Market Based Education, to Next-Generation Food and Drink and Consumer Products with an Emotional Pull. We see these areas excelling in the Indian market, with its ever-growing consumer base and increasingly sophisticated technological and investment infrastructure. The Indian market alone is big enough to produce unicorns that address the subcontinent’s huge unmet needs. We are excited to continue to support and invest in Indian startups in the coming year (and hopefully be back there on the ground once we all come out of this pandemic). By Investment Partner Neha Tanna Related Posts Covid and the Winter of our Discontent Where We Are Investing Now: Digital Health Where We Are Investing Now (and Forever): Delightful Moments
https://medium.com/@nicole-37964/where-we-are-investing-now-india-44e20c934910
['Joyance Partners']
2021-02-02 20:58:23.098000+00:00
['Health Technology', 'VC', 'Indian Startup Ecosystem', 'Indian Startups', 'Happiness Science']
1,043
Daily Bit #173: Learning the Ropes
Word on the Street Learning the Ropes Happy Friday Folks! When I asked yesterday if everyone was still having a good time, I meant it. Trading in these markets is akin to braving the high seas in a dinghy. There’s a strong chance of getting knocked around if you float in place for too long. In light of the recent volatility, I figured that now is an appropriate time to share two questions from an interview that we conducted with Niffler.co. They’re 9 days out from launching their platform, which essentially lets users trade the crypto markets with monopoly money. In other words — risk free. Now, there are pros and cons to trading with zero risk And it has to do with having skin in the game. Folks are less inclined to take things seriously without a risk of getting bit on the butt should things go south. The practice is brought up several times in Market Wizards. I highly recommend the book to anyone interested in learning how some characters made it big in the traditional trading world. There’s a mild difference between traditional markets and crypto: volatility is a regular son of a nutcracker. Plus, each wave of newcomers typically goes in blind and has a high chance of losing a lot of money. New traders can also land on the other side of the spectrum, though that depends upon the market cycle (and luck). More importantly, growing an equity stack isn’t impossible. The real pain is preserving it. People that’ve been around the block more than twice or thrice generally apply better risk management strategies than newcomers, who are more inclined to treat trading like a hobby (this is no bueno FYI). This turned into a bit of a rant… but hey it’s Friyay. There are several free resources in today’s other reads that I believe are bomb.com as far as getting your feet wet goes. I think that Niffler is worth looking into if you’re debating an entry into the markets. Whatever floats your (raggedy) boat. 
What are some of the pain points that Niffler.co is looking to solve for cryptocurrency enthusiasts? In terms of newbies, we would say the number one pain point we solve is removing the fear of not understanding the fundamentals of trading and losing their real-life money or capital by making poor decisions on a real-world exchange. When it comes to more experienced enthusiasts or traders, we wanted to give them a home where they could earn money and grow their follower base and their brands by sharing their experience, knowledge and trading actions with a group of people sorely looking for it. We noticed after countless conversations with experienced traders who may currently use Patreon or Telegram, or perhaps even their own website, that the experience was first and foremost theoretical only and not hands-on; we really wanted to change that with our simulated exchange. With Niffler.co these experienced traders and influencers have a home where they can now share their live trading actions, TAs, YouTube videos, links and so much more within their public or closed group feed, and earn monthly ongoing pledges for helping newbies learn. We have built upon what Patreon has done for artists, musicians, etc., and really focused on making that experience more in line with cryptocurrencies and crypto trading. How does Niffler.co’s platform help prepare investors to put “skin in the game”? Do you think there are any downsides to paper trading? Great question… Niffler.co is a simulated crypto-trading platform designed to help people learn how to trade cryptocurrencies without the risk. We achieve this by giving our registered users $100k in play money to trade with, and once they’ve proven themselves through something we call “Proof of Experience” they can achieve “trader” status. At this point, they can also start earning money helping teach others. We like to think of it as Coinbase meets Patreon… without the risk! 
We don’t have anything against paper trading as it works for some and not for others; however, we wanted to approach blockchain, crypto and cryptocurrency trading education from a tech perspective by giving our users a robust hands-on experience through a real-life and real-time simulated crypto exchange. We believe there is simply no safer and easier way to learn than rewards-based gamification that is hands-on and real-time.
https://medium.com/the-daily-bit/daily-bit-173-learning-the-ropes-31437eb1f098
['Daily Bit']
2018-09-07 18:17:26.708000+00:00
['Cryptocurrency', 'Crypto', 'Trading', 'Bitcoin', 'Blockchain Technology']
1,044
Why Bitcoin’s base layer can’t be scaled
As the Bitcoin community argued throughout much of 2017 about how to scale a network hit by high fees, a group of miners and developers decided to take matters into their own hands by forking BTC and increasing the block size of Bitcoin, creating Bitcoin Cash (BCH). Why would increasing block size be the right way to scale the Bitcoin system? When more miners join the BTC network, one might think the additional processing power would increase transaction speed — but this isn’t so, as additional miners only increase the network’s security. In fact, BTC’s transaction processing speed is directly a function of the system’s settings for 1) block size and 2) block creation rate, both of which are community-determined. In the sections below, we’ll explore why transaction speed can’t be scaled using these levers without increasing vulnerability to double-spend attacks (although there are other concerns, like the increased centralization risks and node requirements posed by larger block sizes). Along the way we’ll try to cover a few relevant Bitcoin basics, but there are many other great sources, including the white paper, that have those covered. 
One of Bitcoin’s core security assumptions is limited network delays In Bitcoin’s white paper, a few fundamental assumptions ensure the network’s security: Users each keep a copy of the “blockchain”, and updates are handled for “honest” users by always updating to the “longest chain” they are aware of Miners solve “proof-of-work” puzzles to add blocks onto existing chains, and block additions are handled for honest miners by always adding to the “longest chain” they are aware of When there are limited network delays and honest participants control >50% of the network’s total computational power, the longest chain will be controlled by honest participants and will (with very high probability) grow faster than attackers’ alternate chains In real-world conditions when there are at least some network delays, we’ll discuss how increasingly large block sizes and faster block creation rates lead to more “forking”, which decreases the security of the network. Security (in this context) means preventing double spend attacks When security is discussed in the context of Bitcoin, we are referring to the ability to prevent double spend attacks. There are two stages to getting a transaction completed on the Bitcoin network — A) User A needs to access their Bitcoin wallet to declare they are sending money to User B, and B) the network acknowledges this transaction and adds it to the permanent record on the longest chain. While security for Step A is really about wallet and wallet password security, security for Step B is about making sure previously “valid” transactions aren’t rolled back in “double spend” attacks. What is a double spend attack? It involves an attacker first issuing a transaction on the current longest chain, then using their computing power to create an alternate longer chain that does not include that original transaction. 
Note that for Bitcoin, the longest chain is always the “truth” — honest users pull from it to update their own records, and honest miners add to the longest chain when they find new blocks. Let’s say Bob sends Jim 1 BTC for a pizza. After this transaction is confirmed on the longest currently available chain in block 10, Jim gives Bob his pizza. However, if Bob wants to cheat the system, he can start from block 9 and mine a new block 10 and block 11 which not only don’t include his payment to Jim, but use the same 1 BTC in another payment to pay for chicken wings. Since Bob has created a new longest chain that excludes his payment to Jim, Bob has now spent the same 1 BTC twice (“double spend”) for BOTH a pizza and chicken wings, while Jim has been cheated out of his payment money and is missing a pizza. Bitcoin’s block creation rate and block size can be changed easily… When we discuss Bitcoin scalability, it’s important to note that both the “block” creation rate, which determines the confirmation speed of transactions, and the individual block size are determined by the network. Notably, adding additional computing power will make the network more secure against attackers, but does not directly result in more transactions being processed. Since (# of transactions processed / second) = (# of blocks created / second) * (transactions per block), Bitcoin’s scalability is something the community can set if agreed upon. So why can’t Bitcoin simply scale by increasing block creation rate and block size (e.g. as Bitcoin Cash attempted)? … but only at the expense of security (decreasing the security threshold) This is because increasing either of these variables decreases the network’s security threshold. 
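The throughput identity above can be sanity-checked with rough numbers. A minimal sketch follows; the ~10-minute block interval and ~2,500 transactions per block are ballpark assumptions for illustration, not exact protocol constants:

```python
# Throughput = (blocks created / second) * (transactions per block).
# Ballpark assumptions: one block every ~600 seconds, and ~2,500
# transactions in a full block.
BLOCK_INTERVAL_SECONDS = 600
TRANSACTIONS_PER_BLOCK = 2500

blocks_per_second = 1 / BLOCK_INTERVAL_SECONDS
tps = blocks_per_second * TRANSACTIONS_PER_BLOCK
print(f"~{tps:.1f} transactions per second")  # prints ~4.2
```

Either constant scales throughput linearly, which is exactly the lever the article argues cannot be pulled without weakening security.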
Security threshold = % of the network’s processing power needed to run a double spend attack Bitcoin’s core security rests on the idea that if “honest” nodes control >50% of the computing power, they will be able to mine blocks and grow their longest chain faster than any attacker can potentially create an attacking chain. Let’s run a few basic calculations to see how this might work. H = growth rate of main chain (in # of blocks/second), controlled by honest participants D = growth rate of attacker’s chain Z = amount of computing power, on average, needed to successfully create a block If H > D, then the honest longest chain will grow fast enough to be secure from double spends. Under good network conditions, if the total computing power of the network is C, and the fraction of computing power controlled by honest nodes is h, then say (for simplicity) H = C*h/Z. If the fraction of computing power controlled by dishonest attackers is d, then D = C*d/Z. With ideal network conditions, an attack will succeed if D > H, or d > h. Since all nodes are either honest or dishonest, d+h = 1, and the security threshold is 50%. Higher network delays mean the honest longest chain grows more slowly Previously, we noted that Bitcoin’s security in its original white paper assumes a “synchronous” network, where participants learn of new blocks and the current longest chain instantaneously. Say there is a blockchain with 5 blocks (Blocks = A, B, C, D, E) — think about two scenarios: A) Network/internet is fast - miners learn of new blocks and new longest chains quickly, and quickly switch to working on top of new longest chains. Here, a newly discovered block would be F, and would follow right after E. B) Network/internet is slow - miners don’t learn of new blocks quickly. 
Say Ted’s current blockchain only has 3 blocks, A to C. When he discovers a new block, call it D*, he adds it to his chain (Blocks = A, B, C, D*), only to realize the longest chain is now A to E. His newly discovered block is now wasted and won’t be on the longest chain. Additionally, (A, B, C, D*) is now a “fork” of (A, B, C, D, E) — it’s shorter than the longest chain, and has a different block in an earlier position. Effectively, when honest computing power is wasted creating blocks on top of old chains that aren’t the longest, the longest chain by honest nodes grows more slowly as computing power is wasted on unused blocks. So instead of H = C*h/Z, it might be H = ½ * (C*h/Z) if half of the blocks are wasted. So now, even if honest nodes own 50% of the computing power, the network is no longer safe. For an attacker to be able to win: D > H → C*d/Z > ½ * (C*h/Z) → d > ½*h → d > 1/3 Security threshold ≈ 34% Therefore when there are network delays, h > d alone doesn’t guarantee security. Indeed, an attacker with only 34% of computing power could succeed in an attack. Why doesn’t forking affect dishonest attackers? We usually assume dishonest attackers can communicate more easily among their own nodes (e.g. through direct network connections), and always focus their efforts on growing a single chain, therefore wasting fewer blocks. For more details on this idea, see source DW13 here: https://eprint.iacr.org/2016/454.pdf “Indeed, Decker and Wattenhofer [DW13] already experimentally observed that increasing the networks delays in Nakamoto’s protocol leads to increased forks, and they noted (through heuristic calculations) that an attacker could use these delays to violate consistency with an attack that requires less than 50% of the mining power.” How do block size and block creation rate affect “forking”? Increasing the block size makes it slower for new blocks to be communicated throughout the network. 
This is similar to the effect of slowing down network speeds, in that new blocks and longest chains take longer to be propagated across the network, increasing the frequency of forks. The impact of “block creation rate” is also similar — the more quickly new blocks are being created, the more often the longest chain is growing, and the more often nodes will find themselves wasting “blocks” by working on top of outdated, shorter chains. Final thoughts While adding additional mining power to the network makes Bitcoin more secure, couldn’t you then increase block size or creation rate to increase transactions processed, while still keeping security constant? This might work in theory, but probably isn’t the long-term solution for Bitcoin, since Bitcoin would need to scale by orders of magnitude more before it became competitive with other centralized solutions (like Visa), assuming the goal is to become a method for everyday payments. In the meantime, there’s always the Lightning network for users to hold out hope for.
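The wasted-block argument above can be sketched numerically. This is an illustration of the reasoning, not code from the article; the wasted-block fraction is a free parameter:

```python
def security_threshold(wasted_fraction):
    """Fraction of total computing power an attacker needs to out-grow
    the honest chain, when honest nodes waste `wasted_fraction` of
    their blocks on forks.

    Honest chain growth:   H = (1 - w) * C*h/Z
    Attacker chain growth: D = C*d/Z
    An attack succeeds when D > H; with d + h = 1, solving
    d > (1 - w) * (1 - d) gives d > (1 - w) / (2 - w).
    """
    w = wasted_fraction
    return (1 - w) / (2 - w)

print(security_threshold(0.0))  # 0.5: ideal network, the classic 50% threshold
print(security_threshold(0.5))  # ~0.333: half the honest blocks wasted
```

The worked example in the article (half of honest blocks wasted, attacker needs just over 1/3 of the power, i.e. a ~34% threshold) corresponds to `wasted_fraction = 0.5`.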
https://medium.com/datadriveninvestor/why-bitcoins-base-layer-can-t-be-scaled-ecaa79f6f3cb
[]
2019-02-09 04:11:50.618000+00:00
['Cryptocurrency', 'Crypto', 'Bitcoin', 'Technology']
1,045
Catching poachers with machine learning
Catching poachers with machine learning Building an ML system to detect poachers in nature preserves Full disclosure: I am a maintainer of an open source ML platform, Cortex, designed to build projects like the ones discussed below. On any given day, it is estimated that nearly 100 African elephants are killed by poachers. In total, thousands of animals are poached every day worldwide. A large amount of this poaching occurs within nature preserves, the last (theoretically) safe place for many endangered species. For the rangers tasked with protecting these animals, stopping poachers is a fight in which they are constantly outnumbered. The over $7 billion illegal industry attracts a seemingly never-ending stream of poachers. One nonprofit, Wildlife Protection Solutions (WPS), has recently begun fighting poachers with machine learning—and it’s working. Using a network of motion detectors, cameras, and a trained model, WPS is identifying more poachers at a faster pace than ever before, and introducing a new advantage in the fight against poaching. How do you monitor 1,000,000 hectares of wildlife? One of the hardest parts of preventing poaching is also one of the simplest: Nature preserves are really big. Monitoring a 1,000,000 hectare area, including dense forests, cliffs, and other natural obstacles, at all hours is a difficult task for small crews of rangers—even with remote monitoring. WPS and related groups have deployed motion sensor cameras throughout nature preserves for years now. The cameras work by capturing images of large, moving objects and sending them in realtime to human monitors, who analyze them for poaching activity. If the humans on the other end see poaching activity, they send an alert to a network of responders. But while this remote monitoring is an improvement, it still presents some challenges. 
Analyzing the footage from many cameras all at once—and doing it fast enough to catch poachers in the act—requires a larger staff of reviewers than the average nature preserve has. Even with efforts to automatically filter for images of poachers, WPS estimates that the system only detected 40% of recorded poachers. Detecting poachers with machine learning To increase their detection rate, WPS introduced machine learning into their monitoring system. The monitoring system, before introducing machine learning, could be diagramed like this: The field cameras captured images, delivered them to a monitoring center, and if the humans running the center saw evidence of poaching activity, they’d push notifications to the relevant people. Their goal in introducing machine learning was to insert a trained model, as an API, into the threat assessment stage. All incoming images would automatically be filtered for poaching activity, with only the positives being passed on to reviewers. Working with HighLighter, an ML development platform, WPS was able to train an object detection model that recognized specific animals, as well as humans, vehicles, and other potential signs of poaching: After deploying the model, they were able to plug it into their existing setup without rearchitecting their entire monitoring system. In the first week of testing, they caught two poachers. The team estimates that the system is twice as effective as before, boasting an 80% detection rate, and is constantly improving with more data. Since the initial test’s success, WPS has rolled the model out across nature preserves on three continents, serving over 1 million predictions in its first month alone. How can a nonprofit afford machine learning? One of the many exciting aspects of this story is that this application of machine learning isn’t just feasible—it’s accessible. 
Small teams and solo engineers have been deploying vanilla pretrained models for a while, but designing, training, and deploying a model for a task like this has historically been the domain of larger tech companies. But for WPS, off-the-shelf solutions like OpenCV didn’t work. They needed to train and deploy their own model. Years ago, the fact that they were a small nonprofit would have precluded them from doing this, but not now. Model development platforms and open source models have improved to the point where even small teams can now train models. Engineers have spent years working on open source infrastructure platforms like Cortex, so that any engineer can take a model and turn it into a poacher detector, or a video game, or a cancer screener. People have been talking about democratizing machine learning for a long time, but this project serves as evidence that now, it is finally happening.
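The filtering stage described above can be pictured in a few lines. This is an illustrative sketch only: `detect_objects`, the label set, and the output format are assumptions made for the example, not WPS's or Cortex's actual API:

```python
# Labels that, if detected, suggest possible poaching activity (assumed set).
POACHING_SIGNALS = {"person", "vehicle", "weapon"}

def detect_objects(image):
    """Placeholder for a call to a deployed object-detection model,
    e.g. an HTTP request to a model API. Returns predicted labels."""
    raise NotImplementedError

def filter_for_review(images, detect=detect_objects):
    """Forward only images whose detections overlap the signal set,
    so human reviewers see likely positives instead of every frame."""
    flagged = []
    for image in images:
        hits = set(detect(image)) & POACHING_SIGNALS
        if hits:
            flagged.append((image, hits))
    return flagged

# With a stubbed-out detector, only the second image reaches reviewers:
fake_detect = lambda img: ["person", "tree"] if img == "cam2.jpg" else ["elephant"]
print(filter_for_review(["cam1.jpg", "cam2.jpg"], detect=fake_detect))
# [('cam2.jpg', {'person'})]
```

The design point is that the model slots in as a pure filter: the camera-to-monitoring-center pipeline stays untouched, which matches the article's claim that WPS avoided rearchitecting their system.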
https://towardsdatascience.com/catching-poachers-with-machine-learning-118eec41d5b9
['Caleb Kaiser']
2020-06-12 19:47:56.717000+00:00
['Technology', 'Artificial Intelligence', 'Machine Learning', 'Data Science', 'Programming']
1,046
How We Can Prove Nature Codes Reality
Everything depends on light. Zero and one. Light is the constant in the reality we live. At present. We know, if the light goes out, we’re dead. This means, we can prove that nature codes reality. If you live in Florida, and you move from south to north, on the eastern coast, you will notice, different kinds of lizards. The lizards are all the same. Except as you move north, on the coast, they change. This is basic evolution. Co-evolution, to be exact. The lizards change their shapes, and colors, depending on their environment. Proximity to the sun. Water. And water’s relationship to the sun. Hydrogen, when burned, produces water. Explaining anger and depression, over change. Water moves as hydrogen moves, meaning, everything moves, according to its relationship to light. This throws everything off, as far as timing, where southern Florida is on one clock, northern Florida, and, also, east and west, is on a slightly different clock. We know about the subtle change in timing, as it comes to, and from light. We call this ‘time zones.’ Explaining the relativity of light (and time). Meaning no two anything’s share a time zone. This you get to with your abstract (version of) intelligence. Meaning, this is something you automatically know, without ‘knowing it.’ So, the idea light is the (only) constant is not completely correct. Light is a line. A line is the diameter, and circumference of a circle. Meaning, we have two constants. Zero and one. Energy and light. Time and space. Any X and-or Y. Any zero and-or one. Zero is dependent on one, and vice versa (making everything dependent on everything else). This explains why technology is now taking over everything, and will determine, eventually, if we even have an ‘earth.’ Light. Sun. Conservation of the circle is the core dynamic in nature.
https://medium.com/the-circular-theory/how-we-can-prove-nature-codes-reality-1eab4a65f618
['Ilexa Yardley']
2017-08-03 12:06:44.941000+00:00
['Circular Theory', 'Universal Relativity', 'Universal Circularity', 'Technology', 'Science']
1,047
Improved Shipping with The Proof of Trust
Midway through 2021, and the entire world is still adjusting to living with COVID-19. As if things weren’t challenging enough, Brexit has added further disruption to the British and European political landscape, with many difficult discussions about trade deals and so on. Shipping companies have had to navigate tricky waters, literally and metaphorically, to bring goods across borders. The added red tape from Brexit and extra safety measures to control the spread of coronavirus have severely impacted the expected shipping time across most European nations. According to sources, container freight rates from Asia to the UK have increased almost fourfold since November. This is due to a global rebound in demand for consumer goods and materials and congestion across UK ports, where many empty containers have been left stranded. As well as the challenges mentioned above, European customs officials have long been working hard to reduce illegal trade and fraud entering the bloc. Organised criminal networks target perceived weaknesses in customs procedures as access points to fraudulently bring goods into the EU. According to some EU reports, imports of counterfeit and pirated products amounted to around €121 billion (6.8% of EU imports) in 2018. Of course, with new legislation, there is always a settling in period, and the safety measures to stop the spread of COVID are simply unavoidable. However, there are things we can do. Enhanced customs technology and communication channels would improve security and reduce fraudulent imports. This is precisely what we at The Proof of Trust want to achieve — enable safer and faster cross-border trade while simultaneously decreasing delays. The Proof of Trust technology integrates into existing systems and acts as the backbone of a decentralised customs clearance system. Shipment issues, such as missing licenses or identification queries, would be immediately checked by customs officials or other vetted individuals. 
If the cargo is identified as fraudulent, The Proof of Trust will pass information onto the relevant authorities to intercept illegal goods. A recent cooperation agreement has been signed between the Secretary-General of the World Customs Organization (WCO) and the Director-General of the European Anti-Fraud Office (OLAF), which focuses on EU wide information sharing. Our technology aligns perfectly with this agreement. Customs departments connected via The Proof of Trust could share information to improve fraud identification — saving taxpayers money and speeding up worldwide shipments.
https://medium.com/@theproofoftrust/improved-shipping-with-the-proof-of-trust-5f571de0ce72
['The Proof Of Trust']
2021-07-30 12:46:32.295000+00:00
['Shipping', 'Government', 'Blockchain', 'Technology', 'Smart Contracts']
1,048
Journalism In Dark Times
Fifteen years ago, I published my first news piece in a print magazine. After that, I went on a long journey discovering and working in diverse fields including blogging, citizen journalism, campaigning, translating, producing and managing. Some roads were bumpy while I found myself in others, and these became a launchpad to some successful media initiatives. However, working in independent media in the Arab world has become increasingly more difficult, especially since the counter-revolutions began to gain strength in 2013. Counter-revolutions have had a profound effect on the media industry, both in the countries of the Arab Spring and across the wider Arab world. Security authorities have come to realize the power of the media and its impact on public opinion, as illustrated notably in 2011, when social media platforms were successfully used to mobilize people in protests, resulting in a political transition in a number of Arab countries. At the time, many television networks were prompted to change their policies and give more airtime to young voices. By 2013, things had completely changed. Pre-2011, social media platform users in the Arab world were mostly young people who belonged to what can be classified as a rising middle class. However, following the 2011 uprisings, the general Arab public increasingly signed up to those platforms and started following them closely. This led to a significant change in the nature of discussions on those platforms. The new users came from different age groups and backgrounds, and Facebook, among other platforms, ceased to be a safe space to hold political discussions or start human rights campaigns. Facebook debates turned into social confrontations that could land people in jail — something that has happened to many Egyptians who were simply expressing their views about current events in their country. 
Furthermore, many Egyptian journalists were arrested for doing their jobs, bringing Egypt up to the shameful third place in the world ranking of countries with journalists behind bars, after China and Turkey. Mahmoud Abou Zeid, known as Shawkan, was finally released after spending more than five years in prison on trumped-up charges — Amnesty International At the same time, the Egyptian State launched a crackdown on independent media outlets. Hundreds of websites have been blocked in Egypt and journalists have been demonized, portrayed as working for foreign entities and betraying their country. These actions have affected the personal security of all journalists. The Egyptian government also moved to establish a number of companies with ties to state security agencies and the intelligence service. These new companies then acquired many television networks and news websites, which led to identical news coverage on all of the outlets. The 2016 U.S. presidential election laid bare the role played by social media in making fake news go viral, which in turn prompted social media platforms to work on adapting their algorithms such that less news would be posted in news feeds, instead favoring a higher proportion of posts from friends and family. This greatly affected the independent media industry in Egypt, most of which had already fled from traditional news websites to social media networks in an attempt to reach the public. The Egyptian State does not allow for an independent media, and constantly seeks to hinder any funding for institutions supporting independent media by drafting legislation aimed at paralyzing civil society. Alternative methods like social media outlets are also facing a crisis, not to mention the numerous risks faced by everyone involved in media. How to solve this dilemma? This is what I am trying to answer in my journey as a fellow in the Tow-Knight Entrepreneurial Journalism program at the Craig Newmark Graduate School of Journalism at CUNY. 
The 2019 Tow-Knight Entrepreneurial Journalism Cohort. Not pictured: Emiliana Garcia. Photo by Skyler Reid Independent journalism in the Arab world has generally kept to traditional means of publishing its content, such as text-based news and multimedia. Independent initiatives have not sufficiently explored innovative ways of doing their critical jobs in these tough years. Al Jazeera’s AJ+ has greatly impacted the news industry both globally and in the Arab world. The digital consumer has become more interested in video than text-based content. However, a wide-scale investment in digital news has not happened yet. Chatbots, Telegram groups and Instagram accounts have provided new tools for publishing content. For example, Iran is a country where Telegram and Instagram are widely used, and Telegram was employed during the 2017–18 protest against the regime to circumvent governmental obstruction, enabling protesters to coordinate and to inform the world about events in the country. Similar ways of using new tools will give greater chances for independent media to reach wider audiences. The dependence of such initiatives on a small number of donor NGOs has, however, contributed to limiting the chances for discovering new tools in the media industry and in seeking out funding. I’m looking for solutions to this complex dilemma by aiming to create a new model of non-profit journalism based on grants and individual donations. This model would ideally be able to reach an audience of millions using new tools that can bypass governmental obstruction. These restrictions may have succeeded so far in disrupting journalism in the Arab world, but cannot obstruct journalism forever.
https://medium.com/journalism-innovation/journalism-in-dark-times-dacf8a0e3bd9
['Abdelrahman Mansour']
2019-03-20 16:48:50.893000+00:00
['Journalism', 'Innovation', 'Technology', 'Human Rights', 'Media']
1,049
New World Order for Dummies
Who is Misusing the Vacuum of Power? To answer this question, I would say the structure or system that created the balance of power in our world in the first place. A pyramidal system with an obscured elite at the top. Not all people at the top of that system are intrinsically bad, but the system has allowed many of them to become as corrupt as they are. This system is no longer serving humanity as a whole, so much is clear. This pyramid system might not feel old, but it is. It stems from our old feudal system, where the king reigned over a hierarchy. The French revolution brought hope to the people with equality in brotherhood and freedom, but it didn't kill the underlying, invisible pyramid structure of power. Throughout the last decades, the top of our most powerful pyramid, the money pyramid, has been ruled by an elite group of people, families, clans, sects, whatever we can name them. Their strategy has always been to divide and conquer, by financing both sides of many wars for example. The problem for the elite is that they’re losing power. This is the exact reason they are going for absolute power now, because technology, for the first time in history, potentially allows them to. They don’t realize that they are waking up humanity, and hence they’re playing with fire. The other problem for them is that they could operate in the shade for hundreds of years. Now, their power structures and practices are getting very exposed. See the structure as an invisible mould of our society in the past, but now this form of invisible power is pushing slowly but surely through the veins of society. Like the chocolate letter, M (of Money) is made by pushing this letter M slowly up through liquid chocolate, surfacing it, and now the M is hardening and becomes visible for everybody. The great distribution of money, from the top of the pyramid into the wallets of all, would be when we all got some part of that M chocolate letter. Hmmm, yammie. 
We seem to be further than ever, although the earth is screaming for it. It will be the moment that huge groups of people start to wake up to the fact that the ‘Pyramid’ takes 90% of our produced labor and capital. This simply means that when the pyramid breaks down, we only have to work 10% to have the same as what we have now. Now let’s assume the plan towards absolute control is real, and that these entities need to work more and more in full daylight. Well, let’s make that part of the strategy. Show the world this dark intent through shamelessly publishing the agenda: Netflix series about pandemics, YouTube movies about agenda 201, sponsored dark ads on Facebook, and posting the complete plan on the World Economic Forum website. At the same time, a censorship machine the size of a nuclear bomb has been set in place to block, censor and blacklist any content and voices that question the agenda narrative. There are more conspiracy theorists these days than real journalists, it seems. It’s disgusting and really worrying. The power of the four intertwined cartels punches the people in the face shamelessly from behind the curtains, and while we try to defend free speech, free press, our constitutional democratic rights, another punch is placed, another agenda point is being played out. A psychological war. This brainwash is unleashed against the population, nothing more and nothing less. The masses don’t see this; they are too hypnotized and indoctrinated by a monster that operated in the shades for decades and now smells the chance for absolute power. I don’t blame any cognitive dissonance. The truth, in this case, is pretty painful to face and induces a feeling of core unsafety, since father state is abandoning us, leaving us as prey in the claws of a bunch of power-hungry hyenas. Millions of people choose to hold on to the mast of the sinking Titanic of our old world, instead of jumping aboard in disobedience. 
In order to grow, we must become aware of this pain, look through the illusion of trust we placed in our democracies, and start creating new foundations for new forms of living together, built on new values. Our biggest obstacle to victory is our own fear. The beasts on the other side seem to feed on it.
https://medium.com/spirit-of-crypto/new-world-order-for-dummies-5d379831ed17
['Lucien Lecarme']
2020-09-17 09:45:23.034000+00:00
['Humanity', 'Economy', 'Society', 'Technology', 'Cryptocurrency']
1,050
Understanding Git Terminologies : The Fun Way.😎
Git logo from the official git documentation website. There are hundreds of Git tutorials out there, each explaining various aspects of Git and how to use them. Why then am I creating another one? The answer is that I simply want to explain Git in a fun and easy-to-relate manner. This post does not promise to cover everything there is to know about Git, else we would end up reading for a really long time. Now that I have answered that question, let us dive into the wonderful world of Git. What is Git? Git is an amazing tool. Very amazing, I must say. When I first started with Git, whenever I searched for "What is git?", the first answer I saw was always along the lines of "Git is a Version Control System…" Most times, I wanted a straightforward answer. "Yes, it is a Version Control System. I have seen that in the twenty other pages I checked, tell me what Git is already!" So if you are like me, and you want to go straight to the Git terminologies, sorry to disappoint you, but we will have to start with Git being a Version Control System (VCS). Git is a Version Control System, the most commonly used version control system at the moment. Being a Version Control System means that it keeps track of all folders or files being monitored by it. Git does not stop at just tracking your files; it also keeps track of: What files were changed. What changes were made. Who made those changes. When the changes were made. When most people hear of Git, they think of software developers. This is not necessarily the case. Git can be used by anyone who wants to keep track of their files as they work: a writer, a designer, or someone who wants to track all the movies on their computer. Anyone at all. Git also makes collaboration very easy; it allows multiple persons to share and work together on the same set of files without conflicting with each other's work. Cool right? Definitely! 
To share and work with other people using Git, you will need access to a Git-based hosting service like GitHub, Bitbucket, and others; you can research which one you would love to use. Git is software that you can access through your computer's command line (terminal), or through Git Bash. I use Git Bash. To get started with Git, you can read more about it on its documentation website, which is linked below. 👇 Git — Installing Git Now that we know what Git is, let's talk about the various terminologies used in Git. Some Git Terminologies We Should Know. Git init: This is the command used to initialize Git in a folder. In your terminal or in Git Bash, you go into your desired directory and then type "git init". By doing this, you make Git like a personal security officer for that folder. Whatever goes on in that folder will be monitored by Git. If a single letter of a word in a file is changed, Git will create a record and definitely let you know when you ask it. Git status: This is your way of asking Git for the events that have happened in the folder you asked it to monitor. When you type "git status", Git shows you every change that has occurred in that folder. If new files were added, or old ones were deleted, Git keeps a record and displays it to you when asked. In my opinion, git status is probably the most used Git command. I never get tired of using it. You should also make it a close pal, because we always need to know what's happening in our files. Git clone: If you are a developer who uses a Git-based hosting service like GitHub, you may want to clone a remote repository (like an online folder from GitHub). To do this, you type "git clone" in your terminal followed by the link to the repository you want to clone. Cloning does just as its name suggests: it creates a copy of everything in the remote repository and puts it in your local computer. Different locations, same content. Terrific! 
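As a quick sketch of the commands covered so far; the directory name and the clone URL below are made up for illustration:

```shell
mkdir demo-project && cd demo-project
git init      # initialize git in this folder; it now watches everything inside
git status    # ask git what has happened since the last snapshot

# Cloning copies a remote repository onto your machine (hypothetical URL):
# git clone https://github.com/example-user/example-repo.git
```

On a brand-new folder, git status will report that there are no commits yet, which is exactly what we expect.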
Git add: We talked about Git being a super security officer for your folders, right? But a security officer cannot track what never existed in the first place, hence we need to add all files and folders that we want tracked to our Git repository (a folder whose contents are being tracked by Git). Git won't track any file that has not been added and committed to it through the Git terminal. To do this, open your terminal (Git Bash or whichever one you use), go to your desired directory using the "cd name of directory" command, then type "git status" to show all files that have or have not been added; after that, you type "git add name of file". There are different ways to use git add. If you want to add a specific file, you can type "git add name of file", but if you want to add all the available files, simply type "git add -A" or "git add ." Using git add marks the added files as ready to be committed. Note that if you change a file after adding it but before committing, you will need to use git add on it again. Git commit: After adding your files to the Git repository, you need to commit them. This is like making a commitment with Git, telling Git the super security officer to start tracking all activities and changes that happen to those files. By committing a file, the journey between your file or folder and Git has officially begun! When you use git commit, you have to enter a commit message along with it. There are two ways to use git commit: you can choose to type "git commit", in which case your default text editor is opened for you to enter a commit message. The other way is "git commit -m "message"". A commit message is basically a line or more of text explaining what changes were made in the files being committed. It is written like this: git commit -m "this is a commit message" A commit message has to be clear and explanatory; as much as some of us love to have fun while working, it is advised to keep commit messages as clear as possible. 
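The add/commit cycle can be sketched in a few commands; the folder and file names here are invented for the example:

```shell
mkdir book-repo && cd book-repo
git init                              # make Git the security officer for this folder
echo "chapter one" > book.txt
git add book.txt                      # stage this file ("git add -A" would stage everything)
git commit -m "add first chapter"     # record the snapshot with a clear message
git status                            # now reports a clean working tree
```

If you edit book.txt again after staging it, remember to run git add on it once more before committing.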
Nothing like: git commit -m "i am tired" or git commit -m "i give up" Git, being nice and understanding, will accept those commit messages, but in a professional environment, let's try not to use such messages. Git log: Think about your phone's call log: that list of all calls you have received, made, or missed over a period of time. That is the same thing Git does when you type git log. It displays a history of all your Git commits in a project, starting from the most recent commit down to the oldest one. Quite detailed, yeah? Git keeps the best things. Git branch: When working with Git, you can create as many branches as you want. Most people create branches to avoid unnecessary mistakes in their work. Let's use Bally as an example. Bally is a writer with two different versions of her book, Imran. One is free while the other is premium and can only be bought in some selected stores. Both books have very similar contents, but the premium one has some bonus chapters and a special note from the author, Bally. When writing these books, Bally can create two different branches in Git and alternate between them as she adds and removes content. One can be named "Imran free" and the other "Imran premium". How will she create these branches? There are two ways to do it. The first way is to type "git branch name of branch" then "git checkout name of branch". "git branch name of branch" creates a new branch and gives it a name; after doing that, "git checkout name of branch" moves you into the branch you just created so you can start working in it. The second way is more like a shortcut: Bally can type "git checkout -b name of new branch". By doing this, she just created a new branch and also moved into it with a single line of text. Now that she has two branches in her Git repository, when she wants to work on one, she will use git checkout with that branch's name to go into it. 
If she is in the Imran free branch and wants to work on the premium book, she will simply type "git checkout Imran premium", and Git immediately switches to the Imran premium branch, where she can now work on the premium version of her book. Great! Git checkout: This is a way to switch from one branch to another. I already mentioned it when explaining git branch. Using git checkout is quite easy: you simply type "git checkout name of branch" and voila! You are in your desired branch. Git fetch: Git fetch is a command used to download contents from a remote repository. To use it, you type "git fetch name of remote". Remember that a remote repository is the folder hosted on a Git-based hosting service like GitHub. So, when working with a team, you can use git fetch to download all the recent changes that your teammates have made, and then decide whether you would like to merge them into your own work or not. Git merge: Git merge is another command that you will definitely use a lot when working with teams or collaborating. The git merge command takes the commits from another branch and joins them into your current branch. Using git merge joins the branch you fetched (see git fetch above) to your current branch, making them one. To use it, you type "git merge name of branch you want to merge". Note that git merge won't complete automatically if there is a place where both branches made different changes to the same line. If line twenty in branch A is "Hello world, I am Nike", and line twenty in branch B is "Hello world, I am Akinsola", Git will not know which one is supposed to be correct, so it won't merge. You will need to resolve those conflicts in the files manually. Git pull: Git pull is a combination of two other commands: git fetch and git merge. Git fetch can be seen as a safer version of git pull because you download the contents from the remote repository and then check whether you want to merge them with your work. 
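Bally's branching workflow can be sketched end to end; the branch and file names below are invented for the example (and the final merge folds the premium commits back just to show the command in action):

```shell
mkdir imran && cd imran && git init
echo "shared chapters" > imran.txt
git add imran.txt && git commit -m "common content"

git branch imran-free                 # create a branch...
git checkout imran-free               # ...then switch into it
git checkout -b imran-premium         # shortcut: create and switch in one line
echo "bonus chapter" >> imran.txt
git add imran.txt && git commit -m "premium extras"

git checkout imran-free               # jump back to the free version
git merge imran-premium               # fold the premium commits into this branch
```

Had both branches changed the same line, that last merge would stop and ask Bally to resolve the conflict by hand.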
With git pull, you download the contents and automatically merge them into your own work with a single command. Cool, yeah? Right. To use git pull, you type "git pull origin name of branch", or simply "git pull" if you are working on a single branch with tracking set up. Note that your files won't merge automatically if there is a conflict like the one I described in the git merge explanation. Git push: Hurray! You have worked tirelessly on your work (even if it is a single-line change); now is the time to share it with other people. Git push is the command to send your work to the remote repository where others can see it. Most times, git push comes after the git pull command. This is because you need to make sure your local repository is up to date with the remote repository before sending a new set of files to it. After adding your files, committing, and pulling from the remote repository, git push is often the command that comes next. To use git push, type "git push" followed by the remote and the name of the branch you want your changes pushed to, for example "git push origin main". And you are done! Congratulations. 🎉🎉🎉 You came this far, which means that you now know the meaning of different Git terminologies and how to use them in your Git terminal. That is really great, I must commend you. Now remember that you can only become great at something if you practice constantly; so don't stop here, use Git as much as possible, and enjoy the amazing world of Git. For more information on Git, you can check its official documentation page, which is linked below. 👇 Git — Documentation
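You can try fetch, pull, and push without any hosting service by using a local bare repository as a stand-in remote; every name below is made up for the sketch:

```shell
git init --bare shared.git            # plays the role of the GitHub-side repository
git init work && cd work
git remote add origin ../shared.git   # point this repository at the "remote"

echo "hello" > notes.txt
git add notes.txt && git commit -m "first note"
git push origin HEAD                  # publish the current branch to the remote

git fetch origin                      # download remote changes without merging them
git pull origin "$(git branch --show-current)"   # fetch + merge in one step
```

Pushing HEAD sends whatever branch you are currently on, which sidesteps the question of whether your Git version names the default branch master or main.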
https://medium.com/swlh/understanding-git-terminologies-the-fun-way-76dd4759677a
['Hafsah Emekoma']
2020-06-16 19:39:57.343000+00:00
['Web Development', 'Github', 'Technology', 'Beginners Guide', 'Git']
1,051
Displaying images from a private cloud storage in Power BI report
The internet is flooded with guides showing how to use a cloud storage service to display images in a PBI report. Most of them mention that the URI has to be anonymously available, for example. This is perfectly fine if you don't have any sensitive data. For a company that prefers to keep their image files private, this approach falls short. Solution: For this solution I have chosen Azure Blob Storage as my cloud storage provider. When using Power BI we are already in the Microsoft universe, so let us stay there for convenience's sake. Prerequisites: Implementation (using Azure PowerShell). First we have to log in to the Azure account with this command: Connect-AzAccount Next we want to create a context for where the shared access signature (SAS) should apply. This is done by first retrieving the access key of the storage account. 1. Go to the Azure Portal 2. Go to Storage Accounts 3. Click the specific storage account 4. Settings > Access keys 5. Copy a key. Replace <> with your account name and storage account key and run the command. $ctx = New-AzStorageContext -StorageAccountName <storage-account> -StorageAccountKey <access-key> To create the SAS we first need to specify the context we just created via the -Context parameter. Here I give read permission with 'r' (highly recommended), as we want users ONLY to view the content. The lifetime of the SAS token is set by the -ExpiryTime parameter, which has to be of type DateTime. I created one valid for n days into the future. The container name has to be specified to generate the token for the container we want the user to access (see the docs for more information about the parameters). Run this command and replace the <> with your container name and expiration day count. New-AzStorageContainerSASToken -Name <container-name> -Permission r -ExpiryTime (Get-Date).AddDays(<number>) -Context $ctx This should return a SAS token that can be used as an extension of the storage resource URI. 
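The token is simply appended to the blob URI with a "?" separator. A minimal sketch; the account, container, and file names below are placeholders, and the token shown is abbreviated:

```shell
# Placeholders throughout; substitute your own account, container, blob, and token.
BASE_URI="https://mystorageaccount.blob.core.windows.net/images/product-01.png"
SAS_TOKEN="sv=2020-08-04&sr=c&sp=r&se=2021-12-31T00%3A00%3A00Z&sig=REDACTED"
echo "${BASE_URI}?${SAS_TOKEN}"
```

In Power BI, a column holding such URLs is typically given the Image URL data category so the report renders the picture instead of the text.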
Storage resource URI with a SAS Token extension WARNING: Only share the Power BI report with people who should have read permission to the container. The token can easily be retrieved through the report itself.
https://medium.com/tech-simplified/displaying-images-from-a-private-cloud-storage-in-power-bi-report-93d3bf041458
['Jesper Enemark']
2020-12-20 20:51:00.511000+00:00
['Business Intelligence', 'Image Processing', 'Technology', 'Power Bi', 'Azure']
1,052
Hacking capitalism: A nomadic hackbase is trying to usher in techno-utopia
“Hacking is more than just ‘geeking out’ with computers. We see it as a determination to solve problems the non-typical way, ‘hacking through’ them,” says David, co-keeper of the hackbase Cyberhippietotalism (CHT). We’re sitting in Quinn’s pub in London, at the heart of Empire, discussing alternatives to soul-sucking nine-to-fives. “The other day, somebody asked me what a typical day looks like. I have no idea, do you have some sort of a routine where you wake up every day and go to work?” he asks, much to my amusement. Set up in 2011 on the island of Lanzarote, Canary Islands, Cyberhippietotalism or CHT describes itself as a “tactical post-capitalism research project, building hackbases (live-in hackerspaces) [as] free, sustainable lifestyle infrastructure”. What it offers is an integrated space for work and co-living, aiming to create a blueprint for self-sufficiency using open technology. The goal is to reduce dependence on money and trade, effectively freeing people from capitalistic modes of production and the routine that comes with it. David describes his project in quasi-utopian terms: a space surrounded by ocean, providing greener infrastructure and free time to pursue “creative technological and art projects” at minimum cost. “We are trying to survive and thrive in off-grid barren lands of Canarian deserts/mountains — working with architecture experimentation, new energy systems, water, communications, planning, as well as shopping, trying to grow food, working on our van, cooking, exploring. We document our processes, writing them down as strategies and tactics,” explains David. Map of Lanzarote showing hackbase location/ Totalism.org via CC “I wanted to establish an autonomous network of spaces where you wouldn’t necessarily need to own or rent a place in order to move seamlessly from one hackbase to another in this self-organised autonomous network. 
I saw it as a lifestyle — this was the kind of life I was already living and wanted to expand on.” The hackbase, a term David claims to have coined, draws from the Roommate Anti-Pattern of the classical hackerspace design with additional nomadic live-in infrastructure. He explains that while hackerspaces are “hobbyist” places one goes to during breaks from a job, the hackbase aims to reinvent the basic life & work infrastructure by eliminating the separation between the two. “It’s important that I have the free time to do my struggle, and that the struggle doesn’t get hampered by the necessity to work, to labour in a capitalist system of exchange.” There are currently 1317 active hackerspaces all over the globe, and 355 awaiting execution. CHT, however, was one of the earliest hackerspaces in Europe to provide live-in hackerspace infrastructure in an attempt to “deploy postcapitalism”. “Capitalism cannot work due to internal inconsistencies: both societal and ecological. In capitalism, the majority of the workers labour for their own subsistence; however, I believe advanced technology is ushering in the end of work. People who used to work in factories previously will now be redundant, causing job cuts and leaving them with no means to pay for basic food and shelter anymore,” he says.
https://medium.com/hacking-digital-britain/hacking-capitalism-a-nomadic-hackbase-is-trying-to-usher-in-techno-utopia-f5366dd79e6d
['Manisha Ganguly']
2017-07-27 17:43:02.214000+00:00
['Politics', 'Hacking', 'Technology', 'Capitalism', 'Hackerspaces']
1,053
Address These 5 Questions to Understand Python Logging
1. Why Doesn’t It Use the snake_case Naming Convention? As we all know, PEP 8 specifies how we should name things in Python; in essence, it’s the so-called “snake_case” naming convention. For instance, say we have a function that sends a message. The Pythonic name would be send_message. However, some other languages use camelCase for function names; there, the function would be named sendMessage. Unlike other modules in the standard library, the logging module doesn’t use snake_case naming. Instead, it follows the camelCase naming convention. The reason is that this module was contributed by its author, Vinay Sajip. According to the author, the development of this module was informed by Java’s logging package and Apache’s log4j, both of which use camelCase naming conventions, and he continued with that convention in this module. The good thing is that although the module doesn’t follow the snake_case naming convention, it has internal consistency due to its uniform use of camelCase. So you’ll be less likely to be confused about various names in the module, given that consistency. For interested readers, you can refer to this discussion on Stack Overflow regarding the topic.
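The camelCase convention is easy to confirm from a shell; getLogger, setLevel, basicConfig, and getLevelName below are the module's real public names, with their never-existing snake_case counterparts noted in the comments:

```shell
python3 - <<'EOF'
import logging

logging.basicConfig(level=logging.INFO)       # camelCase: basicConfig, not basic_config
logger = logging.getLogger("demo")            # camelCase: getLogger, not get_logger
logger.setLevel(logging.WARNING)              # camelCase: setLevel, not set_level
print(logging.getLevelName(logging.WARNING))  # prints WARNING
EOF
```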
https://medium.com/better-programming/address-these-5-questions-to-understand-python-logging-e8a45718819
['Yong Cui']
2020-11-20 16:52:06.045000+00:00
['Technology', 'Artificial Intelligence', 'Software Development', 'Programming', 'Python']
1,054
Why You Should Not Switch to Dvorak
Why You Should Not Use It Non-standard keyboard layout Despite having a good reputation and being one of the most popular non-standard keyboard layouts, Dvorak is not mainstream. You can configure a MacBook Pro and choose the keyboard layout on the Apple website; Dvorak is not an available option. You will have a hard time finding a keyboard with the Dvorak layout. Note that iOS doesn’t have a Dvorak keyboard layout in its settings either. MacBook keyboard options One solution is to get a mechanical keyboard with interchangeable keycaps and manually change the layout, a tedious task that you should only have to do once. On iOS, you can install a third-party keyboard application to change the keyboard layout and use Dvorak. Keyboard shortcuts Using Dvorak means you will touch-type and not have to look at your keys to type, so you might argue the previous section is not that big of a deal. But keyboard shortcuts are a bit different. When using shortcuts, I often find myself looking down at the keyboard. I’ll look at a tutorial telling me I need to press Ctrl+W, and with my right hand taken by the mouse, I’ll try to press the keys quickly. But where exactly is W? Let me get back into a resting position to figure it out. For that reason, you will need a mental map between the physical keys and the software/Dvorak keys, unless you switch your keys physically too. Even if you have not used QWERTY for a long time, you have probably formed the habit of one of the most used keyboard shortcut combinations: Ctrl+C / Ctrl+V. And if you take a look at the QWERTY keyboard, you will see C and V are right next to each other. A few applications have designed their keyboard shortcuts to be as convenient as possible for the user, and they don’t have Dvorak users in mind. One solution is to remap the keys with an AutoHotKey script or a third-party application. Pair programming Typing can be cooperative; it is a required input method. 
When you are pair programming with someone, you will want to be able to collaborate. You will end up in situations where you do not have access to a laptop with the Dvorak keyboard layout installed, so be prepared to ask your colleague to install it. If you are in the same situation as me and never learned QWERTY before, you will slow down your teammates. Typing on a QWERTY keyboard is frustrating and can be embarrassing because you will type five times slower. If you want to help someone by typing a few commands, you will have to go back to your desk and send the commands by message. Technical interviews While established companies usually let you use your own laptop, that isn’t always the case with startups. And one thing they do not anticipate is someone using something other than QWERTY. If you have spent years typing in Dvorak and your QWERTY speed is at 20 wpm like mine, you have two choices during the interview: Ask them to set up Dvorak on their laptop. Type with QWERTY and let them be surprised by how poorly you can type on a keyboard. I failed 100% of the interviews where I had to ask to change the keyboard layout on the spot. While I don’t think it was the main reason for those failures, it was definitely not a good part of the interview. Interviews are a moment when you should be confident. They can be time-limited, and you should be focused on your task. When you are given a MacBook Pro with Ubuntu and for more than five minutes no one can figure out how to get Dvorak on it, you are already in a bad situation. Your solution is to ask how the technical interview will proceed and whether you will need a laptop. If they tell you a laptop will be provided, let them know it needs to be configured with your keyboard layout.
https://medium.com/better-programming/why-you-should-not-switch-to-dvorak-6404d4b75f7b
[]
2020-12-30 18:11:21.353000+00:00
['Technology', 'Programming', 'Software Engineering', 'Software Development', 'Productivity']
1,055
Ironically, digital will save brick-and-mortar retail.
It’s undeniable that digital has become an essential component of the shopping journey. Consider the way we use digital devices before and during a shopping trip: we go on Pinterest to find ideas, Amazon to check out product reviews and compare prices, Yelp to find a good store. Often, by the time we set foot in an actual store, we’ve already decided what to buy. This certainly presents a challenge for retailers who are getting left out of the online decision-making moments. But it also means there’s a huge opportunity for retailers to support their bricks-and-mortar stores by using digital in creative ways. 1. Creating inspiration Websites are a powerful storytelling medium. When it comes to commerce, they are a way to spark people’s imaginations: what will a product allow them to do? What kind of lifestyle will it create? Websites allow retailers to showcase products as part of a complete context or lifestyle. If you’re selling an armchair, you can show the dream living room it creates. If you’re selling shoes, you can show the outfit that could go with them. This helps people imagine how a product could fit into their lives. Second, digital allows people to customize their browsing experience according to personal taste. Every shopping choice is an exploration of the question “What kind of person am I?” You can give people the chance to discover themselves by making choices on your site. For example, Mercier Flooring lets people browse by choosing a collection that matches their personal taste. Are you “Sophisticated & Trendy” or “Pure & Natural”? Classic or Contemporary? These tactics encourage website visitors to explore much more than just the product they came for. 2. Product Information People spend a significant amount of time researching a product online before they commit to a purchase. Digital allows retailers to be involved in this part of the decision-making process. 
Including product reviews, comparisons, or in-depth product profiles helps retailers play a bigger role in the key moments of the shopping journey. Providing product information not only attracts customers, it also increases the amount customers spend: shoppers who’ve researched product choices extensively tend to buy more, or choose a more expensive option. 3. Social Media Integration Shoppers who use social media are: 29% more likely to make a purchase the same day. 4x more likely to spend more on the purchase. (Data from Deloitte Digital) Websites allow retailers to start and participate in discussions around their company by letting people tweet and post directly from the site. Starting third-party conversations around products is a great marketing strategy because messages from individuals will always be deemed more trustworthy than those associated with a brand. 4. Content Creation Many people who come to browse are not ready to commit to a purchase. These visitors are still worth hanging on to, however, because they can become loyal customers over a longer period of time. So how do you keep these people involved with a brand without directly selling to them? The answer is to be an expert. Customers recognize the value of experience and know-how in a specific sector. Websites give retailers the chance to publish content that gives them credibility and visibility. And while their opinion on a specific product might be suspected of having a commercial incentive, customers tend to trust opinions that are given on the industry as a whole. 5. Showcasing Alternatives and Complements Digital offers a unique opportunity for retailers to target customers who have already expressed an interest in a specific product. If a customer finds that one product is not exactly right, a website allows other, similar alternatives to be seamlessly presented to them. 
As well as providing suitable alternatives, each product can be linked to items that might complement the purchase. For example, if someone is looking at a printer, chances are they will also need to buy ink cartridges. Expanding online offerings with these features is not a way of replacing in-store transactions. Rather, it’s a chance for retailers to enrich the brick-and-mortar shopping trip with more enjoyable and convenient experiences. Now that digital sources influence more than half of all in-store purchases, retailers can no longer ignore the need to invest in a website.
https://medium.com/code-mortar/ironically-digital-will-save-brick-and-mortar-retail-b22d4f4d1e72
['Code']
2017-07-18 17:32:58.660000+00:00
['Digital Transformation', 'Ecommerce', 'Retail Technology']
1,056
Why You Need to Accept Mobile Wallets at Your SMB
Shoppers are increasingly using digital wallets like Apple Pay at checkout. Get familiar with this payment method and how your small business can accept it. The use of mobile wallets is on the rise. A mobile wallet is defined as any mobile device capable of making financial transactions. Many smartphones now include mobile wallets as a built-in feature. You can work with your credit card processor to accept mobile wallet payments. This article is for business owners who are considering accepting mobile payments. When Apple Pay was introduced in 2014, many people scoffed at the idea that a smartphone could replace cash and credit card transactions at the point of purchase. Today, mobile payments are on the rise, and expected to surpass $250 billion by 2024, according to a Global Market Insights Inc. report. Several factors are converging to drive this growth: the proliferation of smartphones (about 96% of Americans use them), enabling technology, lifestyle changes, demand for improved customer experiences, and the need for fast, easy, and secure transactions. Millennials are currently the largest audience for mobile payments; nearly half of the people in this age range report using a mobile wallet. Over the last couple of years, top technology innovators like Apple, Google, and Samsung have advanced the mobile payment industry by introducing cutting-edge mobile payment applications, making mobile payments more accessible to more consumers. Merchant support for the technology has also increased, as most new credit card readers and point-of-sale (POS) terminals can accept mobile wallets and other contactless payments. 
For small businesses, accepting mobile payments can improve the customer experience and streamline processes, to name a couple of benefits. Some industry experts say that adopting mobile payment technologies is one way to future-proof your business. But does it make sense for small businesses to jump aboard now? To help you weigh the pros and cons, here is an overview of mobile payments, the potential advantages, and the technology needed to support them. What are mobile wallets? Broadly defined, a mobile wallet includes any technology that turns your mobile phone into a wallet capable of making financial transactions. This can also include making credit card payments, relying on near-field communication ("tap to pay") technology, and often incorporates incentives for customers like loyalty programs and coupons. The distinct advantage is "contactless payment," which typically involves NFC technology. Phones such as the Samsung Galaxy S10 use NFC so you don't have to swipe a credit card; you simply place the phone on a reader that reads the customer's card details. Mobile wallets work in-store for small business transactions, but they can also be used for online payments. It's a way for customers to avoid carrying a physical wallet or purse, using one device for all payments. Of course, the mobile wallet completes the transaction with the customer's existing payment card. For example, they can link Apple Pay to their bank card or credit card. Sensitive card data is replaced with encrypted tokens for additional security. 
With various digital wallet applications, a smartphone can be used to make payments, record and redeem loyalty points, replace paper tickets, convey personal identification, and send credentials that grant access to secured doors and rooms. Key takeaway: Mobile wallets let customers use their mobile devices to make payments online or in-store via tap-to-pay or QR code scans. How safe are mobile wallets? Mobile wallets have preinstalled security features designed to stop anyone else from using the account. While credit cards are easy to skim or steal, mobile wallets are easy to keep track of, since most people generally know where their phones are, and they incorporate encryption technology. They also offer optional security measures to keep unwanted users out of the mobile wallet application, such as a required face scan, fingerprint, PIN, or password. Mobile wallets are harder to steal from because they are harder to replicate. Generally, people hold on to their phones more than they do their wallets. If someone loses their wallet, their cash is gone. If someone loses a phone, the lock on the phone and the app protect against theft. There are some inherent risks in relying solely on a mobile wallet for payments, of course. The phone could die right at the point of purchase, a bug could keep a transaction from going through or make the QR code unreadable, and there is always the possibility of user error. (Thus, you should give your customers as many payment options as you can.) Although there has been a lot of concern about security, mobile payments are more secure than other forms of payment. 
Customers don't have to worry about leaving a card behind at the terminal, and since the data for a mobile payment is encrypted, the risk of data theft is lower. This can build stronger trust between merchants and customers. In addition, many phones use fingerprint identification to verify a purchase. How do mobile wallets make money? The apps' banking partners (i.e., the banks that hold the customers' linked payment cards) pay the mobile wallet companies a small percentage of each purchase their customers make through the app. For example, Apple earns 0.15% of each Apple Pay transaction. For peer-to-peer payments through Venmo, the app charges the payer a percentage of the transaction's value (if they're paying with a credit card). Businesses that accept Venmo payments absorb this cost and pay a 2.9% fee for each transaction. How do you accept mobile wallet payments? Setting up mobile payments for your business is generally fast and affordable. First, you need to choose a credit card processor that supports mobile payments. There are hundreds of payment processing companies on the market, and the best ones can set you up to accept digital wallets. If you already have a payment processor, call your rep and ask what you need to do to accept mobile payments and digital wallets; it may be as simple as upgrading to a new card reader that supports NFC. The card reader or terminal should cost you no more than $500; depending on the mobile payment provider, it may even be free. If you're not yet accepting credit cards, consider working with a mobile credit card processor like Square or PayPal, as it's quick to set up an account with them, upfront costs for processing hardware are minimal, and there are no monthly or annual account fees.
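The fee figures quoted above translate into a quick back-of-the-envelope calculation. The percentages are the article's (0.15% for Apple Pay, 2.9% for Venmo business payments); the function names are illustrative.

```python
def apple_pay_cut(purchase: float) -> float:
    """Apple's share of an Apple Pay transaction (0.15%, per the article)."""
    return round(purchase * 0.0015, 4)

def venmo_merchant_fee(purchase: float) -> float:
    """Fee a business absorbs on a Venmo payment (2.9%, per the article)."""
    return round(purchase * 0.029, 4)

print(apple_pay_cut(100.00))       # 0.15
print(venmo_merchant_fee(100.00))  # 2.9
```

On a $100 sale, the wallet provider's cut is pennies, while the merchant-side processing fee is nearly $3, which is why the hardware upgrade, not the per-transaction cost, is usually the deciding factor for a small business.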
https://medium.com/@animesprout/why-you-need-to-accept-mobile-wallets-at-your-smb-c72df7f0a1d1
['Anime Sprout']
2020-12-26 09:20:17.719000+00:00
['Trending', 'Animesprout', 'Anime', 'Technology']
1,057
There’s About to Be a Lot More Serial Killers in the World
The Origins of Unmanned Aerial Vehicles The military history of these unmanned aerial vehicles (UAVs) goes back further than you might think. In 1849, Austrian forces had the water city of Venice surrounded. But their progress was slow, and Captain Ludwig Kudriaffsky had an idea to hasten the city's surrender. Burning balloons. Note, these are probably not suitable for your child's next birthday party. As for a gender reveal party — if you want to follow recent idiotic trends — then maybe. Fast forward to WWI, when the British inventor Archibald Low created the world's first remote-operated airplanes. While the guise was to use them for anti-aircraft target practice, the real purpose was to attach big fat bombs to their front before kamikazing into enemies. While these planes were never employed to do so, their development went far. He even went on to create the world's first guided rocket in 1917. While the UK government wasn't too excited about his spiffy new tech, the Germans seemed to be aware of the potential mass damage it could cause. So much so that they tried to assassinate him twice, and failed. In WWII, the V-1 rockets were largely based on Archibald's designs, at least initially. Fortunately for the Allies, these were only deployed near the end of the war, in June of 1944. Even still, the Germans managed to launch 9,521 of the mechanical beasts into the UK from France's invaded shores. Of these, 2,340 managed to hit London, killing at least 5,475 people and causing over 16,000 injuries. Thankfully they weren't deployed at the beginning, or that number would be drastically higher.
https://theapeiron.co.uk/theres-about-to-be-a-lot-more-serial-killers-in-the-world-700ea9500e5f
['J.J. Pryor']
2021-06-15 15:01:16.778000+00:00
['Politics', 'Internet of Things', 'Drones', 'Technology', 'War']
1,058
Bitcoin & Banks: The Forbidden Romance
Bitcoin & Banks: The Forbidden Romance Blockchain technology has tried to introduce new concepts to the very traditional and conservative banking industry, but has faced strong resistance. What's the reason behind these prohibitions, and what can be done about it? When Satoshi Nakamoto published the Bitcoin white paper, on the Halloween of 2008 [ref], not every reader was able to understand how disruptive and powerful the subject of Nakamoto's email was. [ref] Years later, Bitcoin got people's and the media's attention, but the idea that crypto-currencies would overtake the entire financial system quickly found itself facing resistance. Several countries' restrictions gave Bitcoin an illegal feel. Quickly, people started associating it with criminals or even some Ponzi scheme. Time never stops and, recently, the DEA admitted that illegal activity no longer accounts for the majority of Bitcoin transactions. [ref] Things have changed in the dynamic but young crypto-currency environment after 9 years of existence. The number of individuals who know Bitcoin as a currency, rather than some complicated technology meant for computer-savvy people, has increased; it's possible to find Bitcoin ATMs and even restaurants that accept Bitcoin, depending on where you are. Bitcoin prepaid cards have also been issued, although in most of the world they have suffered strong prohibitions arising from the fight between banking institutions and crypto-currency users. Huge payment processing players such as Visa, MasterCard and Discover have publicly shared their views on crypto-currencies, opposing the technology as a payment method. Finding a prepaid card that worked worldwide and could be used to spend crypto-currencies was once an easy task. Known card issuers have been forbidden from continuing their operations in most of the world, leaving many crypto-enthusiasts with no option for spending their coins on daily costs. 
[ref] While payment processing institutions show no support for crypto-currencies, banking companies in several countries follow the same path. Even in countries where crypto-currencies are legal or unregulated, exchanges and individual traders are constantly in trouble, with their bank accounts being cancelled. Banks ban anyone who sends or receives transactions from known exchanges and P2P sellers. [ref] While it's easy to understand why some governments aren't bitcoin-friendly (since its nature prevents state institutions from controlling the economy at its root), we might ask ourselves the reason behind the campaign initiated by financial institutions against crypto-currencies. Visa processes around 1,700 tps (transactions per second) with peaks of 4,000 tps [ref] and, in theory, it can scale up to 52k tps [ref]. Bitcoin's current implementation is able to process 7 tps. Is Visa really worried about people buying pizza with bitcoin? Is the obsolescence of standard payment systems the reason why banks fear Bitcoin? Probably not. A look at the freedom crypto-currency users have to transfer funds to any wallet, regardless of which border the destination is behind, may reveal one of the reasons why banking institutions have felt threatened by the crypto advent. It's about the big players. It's known that banks earn profit on every operation. But the fact is that losing big players and big companies that depend on banking institutions on a daily basis for huge remittances could mean losses for banking companies. In a regular international transfer, banks earn profits both on the currency exchange and on the transaction itself, since they offer their own exchange rate for the destination's currency. In business terms, companies that have been paying high rates for foreign currencies, as well as expensive fees for delayed remittances, could have them done instantly, at much lower fees and at a better exchange rate. 
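The throughput gap quoted above is easy to quantify. A quick sketch using the article's own figures (1,700 tps average for Visa, 7 tps for Bitcoin's base layer):

```python
VISA_AVG_TPS = 1_700  # article's figure for Visa's average throughput
BITCOIN_TPS = 7       # article's figure for Bitcoin's base layer

def hours_to_replay_on_bitcoin(hours_of_visa_traffic: float) -> float:
    """How long Bitcoin's base layer would need to process the same number
    of transactions Visa handles in the given time."""
    tx = VISA_AVG_TPS * hours_of_visa_traffic * 3600
    return tx / BITCOIN_TPS / 3600

# One hour of average Visa traffic would take Bitcoin roughly 243 hours,
# i.e. about ten days, which supports the article's point that raw payment
# throughput is not where crypto competes with card networks.
print(round(hours_to_replay_on_bitcoin(1), 1))  # 242.9
```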
It's about the environment. Although it's too early to say that payment processors fear the usage of crypto-currencies as payment methods, the advantages of crypto-currencies in several use cases are good reasons to say they could surpass services that are only offered by banks. And it's not only about remittance. Banks do not fear specific features of Bitcoin or other crypto-currencies, but rather the growth of the entire environment around them. The environment is growing fast. We have crypto-backed loans, crypto-powered remittance, smart contract-based escrow, passive income (e.g., from masternodes) and several crypto-accepting payment gateways. The global acceptance and usage of these services could mean deficits in banks' accounting. And it's time for cryptos to walk on their own feet. A disruptive technology like crypto-currencies should not stake its success on standard payment processors and banks. Of course, having these on board could make many situations easier while working with cryptos, but perhaps the mass of enthusiasts still lacks the understanding that the technology is powerful enough to walk on its own feet, supported only by the growing environment. "the mass of enthusiasts still lacks the understanding that the technology is powerful enough to walk on its own feet […]" Here comes the importance of supporting blockchain-powered projects that replace services offered by banks. Companies like Capitual, a European startup that aims to raise enough funds to operate as a full banking solution with a decentralized administration, where every token owner can suggest and vote on proposals while earning dividends for holding tokens. 
Capitual's ICO will start with a usable product and a payment processor, and the funds will support projects that widen crypto-currency usage, such as ATMs, a physical card, POS machines, loans, escrow and also SMS banking to allow unconnected communities to send and receive crypto-currencies. Crypto-currencies came to change the monetary system. It's a matter of time until the crypto environment is widely used and accepted. It's capable of changing the world's economy, but before trying to get the world to believe in the crypto environment, we must believe in it first. A lot has been done, and a lot is still to be done, but technological progress is unstoppable. Fortunately, it is on our side in this battle. — What do banking institutions fear from crypto-currencies? Do you use crypto-related services? Tell us in the comments!
https://medium.com/capitual/bitcoin-banks-the-forbidden-rommance-8c1543fae933
['Jefrey S. Santos']
2018-08-13 22:21:40.625000+00:00
['Bitcoin', 'Cryptocurrency', 'Blockchain', 'Cryptobank', 'Technology']
1,059
Are dual-screen laptops the future of on-the-go systems?
The buzzword around every tech event so far in 2019 is dual-screen laptops. Originally, laptops were all about portability, meaning smaller displays and more compact keyboards. Brands went lighter and lighter in weight, forgoing ports to maximize compactness. As technology has improved, brands have started to fit more features back into small devices, delivering a more immersive experience for users. A couple of years ago, Razer delighted gamers with Valerie, their triple-display laptop. We've also seen extra-large desktop displays catered to gamers, offering a more immersive experience. Along with the go-big-or-go-home displays, brands introduced curved displays into their lineups, promising finer details, even more immersion, and the convenience of a single screen. But cumbersome systems and huge monitors alienate one major market: those who are always on the go. Dual-screen laptops are all about bringing maximum productivity to those who work just about everywhere. What is a dual-screen laptop? A dual-screen laptop is a laptop that has two screens built into the same unit. This can take on a variety of designs, as we've seen from many top brands. The most notable original design comes from ASUS in 2012. The Taichi Windows 8 notebook was ahead of its time; the back had a full HD touchscreen to complement the main display. Now, brands are doubling down on their double screens by applying even more applications to the secondary display. Who should use a dual-screen laptop? While early designs like the Razer Valerie were aimed at gamers, dual-screen laptops have a much broader audience. They're great tools for creatives, but also practical for any digital nomad, instructor, student, or professional. Your needs will dictate which dual-screen laptop is best for you; they all differ in size and functionality. Which dual-screen laptops are currently available? 
Here are all the big players in the dual-screen laptop game: – Intel Honeycomb Glacier Hinged Dual-Screen Laptop The Honeycomb Glacier is a hinged dual-screen laptop that is geared especially towards designers and creators. The laptop’s hinge lifts the 15.6-inch main display and props up the 12.3-inch secondary display, which is built into the keyboard tray, into a comfortable angle. The keyboard remains flat, for better productivity.
https://medium.com/the-gadget-flow/are-dual-screen-laptops-the-future-of-on-the-go-systems-26d7333b85b3
['Gadget Flow']
2019-06-10 17:59:16.137000+00:00
['Technology', 'Tech', 'Computers', 'Laptop', 'Gadgets']
1,060
The Ultimate Guide to Trade MITx tokens on Uniswap Exchange — Learn with Morpheus Labs
Here's a tutorial on how to buy Morpheus Infrastructure Tokens (MITx), more popularly known as Morpheus Labs tokens (MITx), on the Uniswap exchange. Did you know that other than providing Blockchain solutions for aspiring businesses or individuals, Morpheus Labs has its own token that is publicly traded online? If you are thinking of investing in or getting your hands on cryptocurrencies such as Morpheus Labs tokens (MITx), you would need to trade on reliable centralised exchanges (CEXes like Kucoin, Binance, Bitmax, etc.) or decentralised exchanges (DEXes) like Uniswap. However, as the old saying goes, "Not your keys, not your tokens". Decentralised exchanges offer much better security and ownership of your crypto-assets, as you are in full control of your portfolio. Hence, choosing a good crypto exchange is essential for those who care about their cryptocurrency funds. Today we will be taking an in-depth look at the most popular go-to decentralised exchange, Uniswap, and how you can safely and easily trade MITx on the leading DEX in the space today. Prologue For a start, decentralised exchanges have advantages over their centralised counterparts, such as reducing hacking, mismanagement, hefty fees, and so forth. The main drawback of DEXes is a lack of liquidity, which, in crypto terms, means a lack of trading volume, making it harder to trade and move markets. Thanks to growing support from prominent blockchain and Decentralised Finance (DeFi) projects, Uniswap has enjoyed exceptional growth this year. The reason that many projects and investors have hopped onto this particular DEX is that it managed to address the liquidity issue, which can be a gamechanger in the realm of decentralised exchange. Traders can now exchange (or swap) smoothly on Uniswap. Brief Overview of Uniswap What is Uniswap Uniswap is based on the technology of Ethereum (ETH) — one of the world's most traded cryptocurrencies. 
The DEX allows sellers and buyers to swap ERC-20 tokens freely without the need for KYC or other privacy-intrusive legal processes. "An ERC-20 token is a blockchain-based asset with similar functionality to bitcoin, ether, and bitcoin cash: it can hold value and be sent and received. The major difference between ERC-20 tokens and other cryptocurrencies is that ERC-20 tokens are created and hosted on the Ethereum blockchain, whereas bitcoin and bitcoin cash are the native currencies of their respective blockchains." Unlike centralised exchanges, Uniswap's primary aim is to build the exchange around the community, eliminating middlemen and platform fees. Uniswap uses basic demand-and-supply calculations that form the crux of the exchange. You can buy, sell or simply provide liquidity to a token of your choice (i.e. MITx) using Uniswap. Why Uniswap Uniswap balances out the value of tokens by swapping tokens (ETH / ERC20 — the token you are looking to trade) based on the demand and supply on the exchange. It swaps the token based on how much people want to buy or sell. For instance, when the supply of MITx in the pool decreases while that of ETH rises, this creates "demand": a smaller pool of MITx results in a higher price of MITx. Fortunately for the Morpheus Squad, there is already a contract for MITx on Uniswap, so you can trade in the existing pool. A brief guide to buying MITx on Uniswap At the time of writing, Morpheus Labs has a market capitalisation of more than USD $6 million and more than USD $1 million in trading volume. Most MITx investors have chosen Uniswap as the preferred exchange platform to trade MITx. Contrary to popular belief, it is possible for anyone to trade on Uniswap. In the following paragraphs, we will go through 4 simple steps which could get you to make your first MITx purchase in no time. 
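Under the hood, Uniswap v2 prices a swap with a constant-product formula (x * y = k): the product of the two pool reserves stays constant, so buying MITx out of the pool raises its price, which is exactly the supply-and-demand behaviour described above. A simplified sketch with made-up reserve numbers (the 0.3% swap fee is ignored here for clarity):

```python
def swap_out(reserve_in: float, reserve_out: float, amount_in: float) -> float:
    """Constant-product swap: keep reserve_in * reserve_out invariant.
    Fee ignored for simplicity (Uniswap v2 actually charges 0.3%)."""
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in
    new_reserve_out = k / new_reserve_in
    return reserve_out - new_reserve_out

# Hypothetical pool: 100 ETH and 1,000,000 MITx (illustrative numbers only).
eth_reserve, mitx_reserve = 100.0, 1_000_000.0
got = swap_out(eth_reserve, mitx_reserve, 1.0)  # spend 1 ETH
print(round(got, 2))  # 9900.99 MITx, less than the naive 10,000 spot price
```

The gap between the 10,000 MITx "spot" price and the ~9,901 MITx actually received is the price impact; it grows with trade size, which is why deep liquidity pools matter.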
Connect & Swap Step 1: On the top right corner of the Uniswap website, locate the "Connect to Wallet" button and click it to reveal a popup box as shown in the screenshot below. Select your preferred Ethereum wallet application to connect your wallet with Uniswap. Step 2: Click on the "Swap" menu located in the header, and input the MITx contract address (0x4a527d8fc13c5203ab24ba0944f4cb14658d1db6). If entered correctly, the MITx icon will appear. Click on "MITx" to start trading with ETH. IMPORTANT: Anyone can create and name any ERC-20 token on Ethereum, including creating fake versions of existing tokens and tokens that claim to represent projects that do not have a token. Similar to Etherscan, Uniswap automatically tracks analytics for all ERC-20 tokens independent of token integrity. Please do your own research before interacting with any ERC-20 token, and ensure that the contract address is an authentic one (e.g. MITx — 0x4a527d8fc13c5203ab24ba0944f4cb14658d1db6) Step 3: Choose the amount of ETH you want to exchange for MITx and click "Swap". Step 4: The swap will commence automatically based on the stated time indicated in the popup message that follows Step 3. — — The steps shown above are a summary of how you could buy (or swap) MITx with ETH. If you wish to swap MITx for ETH, this four-step process applies as well (with the exception of choosing from MITx to ETH in Step 1). Providing MITx Liquidity Apart from trading on Uniswap, you may also wish to provide liquidity if you plan on becoming a Liquidity Provider (LP) for the project that you're rooting for (e.g. MITx). Passive LPs are token holders who wish to passively invest their assets to accumulate trading rewards. Similar to trading MITx, you will start the process by connecting your wallet with Uniswap, followed by clicking on "Pool" & "Add Liquidity". Choose the market that you wish to provide liquidity to, or add in the token contract address (i.e. 
MITx Contract Address: 0x4a527d8fc13c5203ab24ba0944f4cb14658d1db6 ) and the amount of tokens you wish to "invest". Once you've decided on the amount to stake for liquidity, click "Approve MITx", "Supply" and finally "Confirm" to confirm the process. Summary For a more detailed overview of how it works, you might want to check out Uniswap's official guide. Below is a brief ecosystem map of Uniswap. Why Uniswap is a good choice Decentralised benefits with volatility advantage Trade basically any ERC-20 token, including MITx No permission is needed and contracts can be established Buy, sell or provide liquidity Simple, straightforward math and calculations (basic demand-supply) Drawbacks Basic knowledge of ETH and wallet linking is required Has to be based on ETH technology Browser / platform compatibility — — — About Morpheus Labs Morpheus Labs is a leader in Blockchain-Platform-As-A-Service (BPaaS), offering mission-critical tools, infrastructure, various blockchain protocols, and blockchain use case references for enterprises and developers to build, experiment and manage their own applications effortlessly at minimal cost and time. Armed with relevant capabilities, the platform offers a multitude of intuitive solutions that enable developers and enterprises alike to take advantage of its platform to build effective solutions for various use cases. Morpheus Labs' purpose is to make it easier and cheaper for people to develop blockchain solutions, empowering businesses to solve the unknowns and complexities in blockchain technology.
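The article's warning about fake ERC-20 tokens suggests at least a mechanical sanity check before pasting an address. A minimal Python sketch follows; it is a format check only and cannot prove authenticity, which still requires verifying the address against the project's official channels.

```python
import re

def looks_like_eth_address(addr: str) -> bool:
    """Mechanical check only: '0x' prefix followed by exactly 40 hex
    characters. Passing this does NOT mean the token is genuine."""
    return re.fullmatch(r"0x[0-9a-fA-F]{40}", addr) is not None

# The MITx contract address quoted in the article:
print(looks_like_eth_address("0x4a527d8fc13c5203ab24ba0944f4cb14658d1db6"))  # True
print(looks_like_eth_address("0x4a527d8fc13c5203"))                          # False (too short)
```

A truncated or mistyped address fails this test, which catches copy-paste accidents; a scam token with a valid-looking address does not, so always cross-check the source.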
https://medium.com/morpheus-labs/the-ultimate-guide-to-trade-mitx-tokens-on-uniswap-exchange-learn-with-morpheus-labs-45da8609c7dd
['Morpheus Labs Team']
2020-12-15 10:02:40.361000+00:00
['Blockchain', 'Technology', 'Cryptocurrency']
1,061
Comparative Analysis of Fundamentals and Smart Contract Transactions on EOS and Ethereum Blockchain (Part 2/2)
Disclaimer: public blockchain technology develops very fast. The information in this article about EOSIO and Ethereum only refers to their status when the article was written. The purpose of this article is to give an overview of the differences between Ethereum and EOSIO. The author supports all public blockchain projects and thinks that the market can accommodate more than one public blockchain platform. To read the first part of the article, please click here. 4 Smart Contracts on the Ethereum and EOS Blockchains In order to assess the advantages and disadvantages of developing smart contracts on Ethereum and EOS, two smart contracts were built and deployed on the Ethereum Ropsten testnet and the EOS Jungle testnet respectively. The first smart contract records location information on the blockchain. The second smart contract deploys a token on the blockchain. These two use cases were chosen for two reasons: · Already existing experience of the author with adding location information to blockchains from previous projects. · These are basic and easy-to-implement smart contracts, and they represent two of the most common use cases of blockchains: recording data and public crowdfunding. 4.1 Ethereum 4.1.1 Contract development To develop a simple smart contract that can be deployed on Ethereum, users can start by following a "hello world" example in Solidity to understand the basic principles. It is usually fairly easy to find solutions on the internet if one encounters problems while writing a smart contract. Questions posted on platforms such as Github and Stack Overflow often get answered quickly. Generally, contract development on Ethereum is a fairly easy process, even for people with little or no experience. Due to the large user base and free tutorials, use cases are usually kept up to date. However, this is mainly the case for simple projects such as recording data on-chain or token deployment. 
For more advanced projects, up-to-date tutorials and other information are usually only available in the form of paid tutorials. The code to track location information in a smart contract on Ethereum is as follows: [Code: Location smart contract] We can see that the contract code is fairly easy to read, as it follows the structure mentioned in section 3.1.1. The modifier created in this example allows only the owner to execute certain functions. It could also be incorporated into the same file; however, in this case it was put in a separate contract called Ownable.sol, so that it is easier to import the modifier again when writing other contracts. Writing a token contract from scratch is very complex. In order to simplify the process, an existing library was downloaded and used. As a result, the token contract code looks a lot simpler than the location contract: [Code: Token smart contract] However, we should always keep in mind that when we use open-source code found online, there is no guarantee that the code is 100% secure. Extensive tests and code audits should be undertaken to avoid vulnerabilities. 4.1.2 Contract compilation and deployment There are two ways to compile an Ethereum smart contract: in a local code editor or using the online integrated development environment (IDE) Remix. If a smart contract is developed in a local code editor, then the command line is needed for compilation and deployment. This can be a bit of a challenge for a beginner, because knowledge of Truffle is needed. Truffle is a development environment and testing framework for Ethereum which allows local and testnet development and testing of smart contracts. There is a wide selection of tutorials available for the different functions of Truffle. However, for beginners with limited experience, it can be time-consuming to set everything up correctly. 
By contrast, the online browser-based Ethereum IDE Remix is a very easy-to-use application that does not require users to manually enter commands. Compilation and deployment of a smart contract happen at the click of a button. Another advantage is that, thanks to its debug and unit-test functions, users can find errors in their code much faster. However, there are also downsides. Remix does not support front-end testing and development. If a developer wants to connect the smart contract to a front-end user interface, it can only be done using Truffle. This means that the deployment of more complex smart contracts requires users to have a deeper understanding of different development and testing environments. In order to deploy an Ethereum smart contract, an ABI is needed. ABI stands for application binary interface; it connects user programs and the operating system. Machine code can be encoded and decoded through the ABI. The ABI is very important when it comes to contract deployment because contract calls are encoded using .abi files for the EVM. When we want to read data from transactions on the blockchain, the ABI is also essential. An ABI includes function descriptions and events and takes the form of a JSON array (Antonopoulos & Wood, 2016/2018). When a contract developed locally in a code editor is compiled, a JSON file is generated for each of the .sol files under the build/contracts folder. The following figure shows part of the ABI for the location contract written for this article: Figure 10: Part of ABI of Ethereum Location Contract 4.1.3 Cost The cost of deploying smart contracts and sending transactions fluctuates in USD. According to the Ethereum Yellowpaper (Wood, 2020), it costs 32000 Gas to deploy a smart contract plus 200 Gas per byte of the source code. The following diagram shows a comparison between Gas price and Ether price. The upper chart shows the average daily Gas price in Gwei (1 Gwei = 10^(-9) Ether). 
The lower chart shows the price of Ether in USD. We can see that the price of both Gas and Ether fluctuates a lot, resulting in unstable dollar costs of deploying and interacting with smart contracts. Figure 11: Gas price vs Ether price (Ethereum Price, Marketcap, Chart, and Info, n.d.; etherscan.io, n.d.) Figure 12 shows the details of the location recording contract that was deployed on the Ethereum blockchain. We can see the fee is not high. Even if calculated with the all-time-high Ether price at the beginning of 2018, which was approximately 1400 USD per Ether (see figure 11), the deployment cost would still be under 1 USD (0.00055928 * 1400 = 0.78). Figure 12: Location contract deployment details As part of the analysis, two locations were recorded on the testnet. Below is the information on the transactions. For an easy comparison, the details of the locations were added in the bottom right corner of each screenshot and the fees used in each transaction were highlighted. Figure 13: Block showing location 1 was recorded Figure 14: Block showing location 2 was recorded We can see that the cost of recording the first location is significantly higher, but converted to USD, the fee paid to record each location is still low: on average approximately 0.15 USD per transaction. For a company that needs to track the location of 1000 machines and record each location five times a week, it would cost more than 700 USD per week, assuming the Ether price stays at a stable level of 1400 USD. If the price of Ether were 350 USD, the total cost would only be a little more than 200 USD. 4.2 EOSIO 4.2.1 Contract development EOSIO smart contracts use C++, which is familiar to many developers; this makes it easy for experienced users to learn how to program a smart contract on EOSIO. A simple smart contract in C++ includes two files, .hpp and .cpp, where the .hpp file is where the contract is defined and includes all the function names and the class. 
The .cpp file includes all the logic of the functions listed in the .hpp file. In EOSStudio, example contracts such as eosio.token and eosio.system are provided for reference. EOS was launched only a few years ago and has seen several updates since then. However, many of the freely available tutorials are out of date and are not compatible with current versions of the network. As with Ethereum, access to most of the more advanced tutorials or code examples is paywalled. The location contract written for this article includes three files: location.hpp, location.cpp and info.hpp. Below is the location.hpp file, where the contract name is defined and functions are listed as actions. Below is the location.cpp file, where all functions are written. The details of the table were put in another file called info.hpp, where "info" is the name of the table: Compared with the Ethereum Solidity contract, we can see that the EOS smart contract has a more complicated structure because there are more files to maintain. Also, the C++ code is not as easily readable as Solidity code for someone who does not have much knowledge of C++. 4.2.2 Contract compilation and deployment EOSIO provides a graphic IDE called EOSStudio which can be downloaded for contract development, compilation, and testing. At the moment it only supports a local node and cannot be connected to testnets yet. During the contract compilation process, two files are generated: an .abi file, which is similar to the Ethereum JSON file, and a .wasm binary file. There are two ways to deploy a smart contract on a testnet: the first is through a terminal, the second is through uploading files on a website. When deploying through the terminal, it has to be done on a Linux or macOS operating system. This can be an additional hurdle for beginners with only a Windows operating system, because a virtual machine is needed or an additional Linux system needs to be installed. 
Also, it is worth noting that each account on EOS can only be used to deploy one smart contract. If a second contract is deployed with the same account, the previously deployed smart contract will be removed. A useful aspect of EOS is that data is organized in tables due to its smart contract structure; one can see the tables from the block explorer directly. However, this can also be a downside, because encryption is needed when recording sensitive information. Figure 15: Location contract table on Jungle Testnet 4.2.3 Cost As mentioned in the first section, there is no transaction fee on EOS. However, when deploying a smart contract, the contract owner not only needs to buy RAM on the EOS network; staking EOS for both CPU and NET is also necessary. It is difficult for inexperienced users to estimate how much RAM to buy and how much to stake. The screenshots below show how many resources were used for deploying the location contract and token contract respectively. The total amount of RAM was bought with 30 EOS. 1 EOS was staked for CPU and NET respectively. Figure 16: Resources used for deploying location contract on EOSIO Figure 17: Resources used for deploying token contract on EOS The RAM price also fluctuates. In the following diagram, the upper chart shows the RAM price in EOS, the lower one shows the EOS price in USD in the time frame from March 2019 to September 2020. We can see that the EOS RAM price correlates with the EOS price. Figure 18: EOS RAM price vs EOS price (EOS Authority, n.d.; EOS Price, Marketcap, Chart, and Info, n.d.) 
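Comparing the two cost models, the Ethereum side is at least deterministic: the Yellowpaper formula quoted in section 4.1.3 (32,000 gas base plus 200 gas per byte of code) converts directly to USD once a gas price and an Ether price are fixed. A sketch with illustrative prices (the gas price and ETH price below are assumptions, not the article's measurements):

```python
def deploy_cost_usd(code_bytes: int, gas_price_gwei: float, eth_usd: float) -> float:
    """Gas to deploy a contract per the Yellowpaper: 32,000 base + 200/byte.
    1 Gwei = 1e-9 Ether."""
    gas = 32_000 + 200 * code_bytes
    eth = gas * gas_price_gwei * 1e-9
    return eth * eth_usd

# Illustrative numbers: a 2,000-byte contract at 1 Gwei with ETH at 1,400 USD.
print(round(deploy_cost_usd(2_000, 1, 1_400), 2))  # 0.6
```

Even at the 2018 all-time-high Ether price, a small contract stays under a dollar at low gas prices, consistent with the article's figure 12 example; the EOS side, by contrast, depends on the fluctuating RAM price and staking amounts, which is harder to predict up front.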
4.3 Advantages and Disadvantages

A summary of the advantages and disadvantages of Ethereum and EOS is shown in the table below:

Table 1: Advantages and Disadvantages of Ethereum and EOS

5 Conclusion and Outlook

In practice, efficient development and implementation of smart contracts by users with limited programming knowledge depend on two factors:

· the availability of complete, up-to-date tutorials, sample code, and other resources
· the availability of stable environments to implement and test smart contracts on different platforms with little effort

In both cases, Ethereum, with its larger ecosystem and more sophisticated solutions, leads the way. All that being said, blockchain and smart contract development are still at a very early stage. There are still a lot of problems to be solved before they can reach wide adoption. However, the technology is also developing rapidly. In Ethereum 2.0 and EOS 3.0, we can look forward to solutions in the following areas:

· Sharding in Ethereum 2.0, which aims to achieve increased scalability without sacrificing much decentralization. Increased scalability also means higher transaction speed.
· Interoperability and cross-chain solutions in EOS 3.0: one blockchain can validate what happened on another blockchain.
· Better user experience, such as easier account creation on EOS, lower transaction fees, and better liquidity, which means users can buy EOS more easily.
· More resilience: when a considerable number of nodes go offline, the network still functions normally.

I hope this article provided a clear comparison for people who are undecided about whether to use Ethereum or EOS. We should keep in mind that there is no “best” blockchain. The market needs decentralized blockchain projects with different characteristics, such as consensus algorithms, fundamental architecture, programming languages, etc. The Internet and smartphones have transformed our lives. Blockchain will do the same.
As the founder of Ethereum, Vitalik Buterin, once said: “instead of putting the taxi driver out of a job, blockchain puts Uber out of a job and lets taxi drivers work directly with the customers.”

References

Accounts And Permissions. (n.d.). EOS Developers Documentation. Retrieved July 30, 2020, from https://developers.eos.io/welcome/latest/protocol/accounts_and_permissions, https://developers.eos.io/welcome/v2.0/protocol/accounts_and_permissions
Alharby, M., & van Moorsel, A. (2017). Blockchain Based Smart Contracts: A Systematic Mapping Study. 125–140. https://doi.org/10.5121/csit.2017.71011
Antonopoulos, A. M., & Wood, G. (2018). Mastering Ethereum. O’Reilly Media. https://github.com/ethereumbook/ethereumbook (Original work published 2016)
Block, E. C. (2020). Bloks.io. https://bloks.io
Bosamia, M., & Patel, D. (2018). Current Trends and Future Implementation Possibilities of the Merkel Tree. IJCSE International Journal of Computer Sciences and Engineering, 6(2018–8–8). https://doi.org/10.26438/ijcse/v6i8.294301
Brickwood, D. (2018). Understanding Trie Databases in Ethereum. Medium. https://medium.com/shyft-network-media/understanding-trie-databases-in-ethereum-9f03d2c3325d
Buterin, V. (2013). A Next Generation Smart Contract & Decentralized Application Platform. 36.
Chadha, B. (2018). Understanding Vyper: The slick New Ethereum language. Blockgeeks. https://blockgeeks.com/guides/understanding-vyper/
Cleos. (n.d.). EOS Developers Documentation. Retrieved July 30, 2020, from https://developers.eos.io/manuals/eos/latest/cleos/index, https://developers.eos.io/manuals/eos/v2.0/cleos/index
Dataflair Team. (2020). Advantages and Disadvantages of C++. https://data-flair.training/blogs/advantages-and-disadvantages-of-cpp/
EOS Authority. (n.d.). Manage CPU/NET/RAM. EOS Authority. Retrieved September 22, 2020, from https://eosauthority.com/wallet/ram
EOS New York. (2018). The EOS Mainnet Launch: A New Dawn.
https://medium.com/@eosnewyork/the-eos-mainnet-launch-a-new-dawn-fa0b5d0fea06
EOS New York. (2019). Managing your EOS Owner & Active Permissions. Medium. https://medium.com/eos-new-york/managing-your-eos-owner-active-permissions-c76bdaf24e6b
EOS price, marketcap, chart, and info. (n.d.). CoinMarketCap. Retrieved September 22, 2020, from https://coinmarketcap.com/currencies/eos/
Ethereum 2.0 Phases. (n.d.). EthHub. Retrieved August 19, 2020, from https://docs.ethhub.io/ethereum-roadmap/ethereum-2.0/eth-2.0-phases/
Ethereum price, marketcap, chart, and info. (n.d.). CoinMarketCap. Retrieved September 22, 2020, from https://coinmarketcap.com/currencies/ethereum/
etherscan. (2020). Ethereum Blockchain Explorer. Ethereum (ETH) Blockchain Explorer. http://etherscan.io/
etherscan.io. (n.d.). Ethereum Average Gas Price Chart | Etherscan. Ethereum (ETH) Blockchain Explorer. Retrieved September 22, 2020, from http://etherscan.io/chart/gasprice
Franco, P. (2015). Understanding Bitcoin. https://www.academia.edu/36423280/_Pedro_Franco_Understanding_Bitcoin_Cryptography_BookSee.org_
Hafid, A., Hafid, A. S., & Samih, M. (2020). Scaling Blockchains: A Comprehensive Survey. IEEE Access, 8, 125244–125262. https://doi.org/10.1109/ACCESS.2020.3007251
Keosd. (n.d.). EOS Developers Documentation. Retrieved July 30, 2020, from https://developers.eos.io/manuals/eos/latest/keosd/index, https://developers.eos.io/manuals/eos/v2.0/keosd/index
Kim, K. (2018). Modified Merkle Patricia Trie — How Ethereum saves a state | by Kiyun Kim. Medium. https://medium.com/codechain/modified-merkle-patricia-trie-how-ethereum-saves-a-state-e6d7555078dd
Konstantopoulos, G. (2017). Understanding Blockchain Fundamentals, Part 1: Byzantine Fault Tolerance. Medium. https://medium.com/loom-network/understanding-blockchain-fundamentals-part-1-byzantine-fault-tolerance-245f46fe8419
Kumar, V. (2020). WebAssembly: Easy explanation with code example. Medium.
https://medium.com/front-end-weekly/webassembly-why-and-how-to-use-it-2a4f95c8148f
Larimer, D. (2018). EOS.IO Technical White Paper V2. GitHub. https://github.com/EOSIO/Documentation
LiquidEOS. (2018). CPU, NET & RAM — The raw materials of the EOS economy. Medium. https://medium.com/@liquideos/cpu-net-ram-the-raw-materials-of-the-eos-economy-c4f85022fae
McCallum, T. (2018). Diving into Ethereum’s world state. Medium. https://medium.com/cybermiles/diving-into-ethereums-world-state-c893102030ed
Nodeos. (n.d.). EOS Developers Documentation. Retrieved July 30, 2020, from https://developers.eos.io/manuals/eos/latest/nodeos/index, https://developers.eos.io/manuals/eos/v2.0/nodeos/index
Platform And Toolchain. (n.d.). EOS Developers Website. Retrieved July 20, 2020, from https://developers.eos.io/welcome/latest/overview/platform_and_toolchain, https://developers.eos.io/welcome/v2.0/overview/platform_and_toolchain
Polar. (2018). What is CPU? What is RAM? And How Does the EOS blockchain Utilize These Resources? Medium. https://medium.com/@polar_io/what-is-cpu-what-is-ram-and-how-does-the-eos-blockchain-utilize-these-resources-a7a52e158652
Potter, J. M. (2018). The Problem with Solidity. Medium. https://medium.com/@XBY_Today/the-problem-with-solidity-be7e6c277a58
Priyadarshini, M. (2018, November 16). The Huge Security Problem With C/C++ And Why You Shouldn’t Use It. Fossbytes. https://fossbytes.com/security-problem-with-c-c-and-why-you-shouldnt-use-it/
QuillHash Team. (2019). EOS Smart Contract Development: Understanding fundamental concepts for writing dApps on EOS. Medium. https://medium.com/quillhash/eos-fundamentals-for-developers-essential-concepts-for-starting-eos-development-9d8e1a263724
Release Notes — Vyper documentation. (n.d.). Vyper Documentation. Retrieved August 10, 2020, from https://vyper.readthedocs.io/en/stable/release-notes.html
Rosic, A. (n.d.). What Is Hashing? [Step-by-Step Guide-Under Hood Of Blockchain]. Blockgeeks.
Retrieved June 11, 2020, from https://blockgeeks.com/guides/what-is-hashing/
Rosic, A. (2018). What is Ethereum Gas? [The Most Comprehensive Step-By-Step Guide Ever!]. Blockgeeks. https://blockgeeks.com/guides/ethereum-gas/
Solidity — Solidity 0.7.1 documentation. (n.d.). Solidity Documentation. Retrieved August 23, 2020, from https://solidity.readthedocs.io/en/latest/index.htm
Structure of a Contract — Solidity 0.7.1 documentation. (n.d.). Solidity Documentation. Retrieved August 23, 2020, from https://solidity.readthedocs.io/en/latest/structure-of-a-contract.html
Swamy, A. (2018). Pros and Cons of Solidity. SixPL. https://sixpl.com/pros-and-cons-of-solidity/
Szabo, N. (1997). The Idea of Smart Contracts. https://www.fon.hum.uva.nl/rob/Courses/InformationInSpeech/CDROM/Literature/LOTwinterschool2006/szabo.best.vwh.net/idea.html
Transactions Protocol. (n.d.). EOS Developers Documentation. Retrieved August 9, 2020, from https://developers.eos.io/welcome/latest/protocol/transactions_protocol, https://developers.eos.io/welcome/v2.0/protocol/transactions_protocol
Visa Fact Sheet. (2017, August). https://usa.visa.com/dam/VCOM/download/corporate/media/visanet-technology/aboutvisafactsheet.pdf
Voshmgir, S. (2019). Oracles. In Token Economy: How Blockchains and Smart Contracts Revolutionize the Economy. https://www.amazon.de/gp/product/3982103827/ref=as_li_tl?ie=UTF8&tag=sherminde01a-21&camp=1638&creative=6742&linkCode=as2&creativeASIN=3982103827&linkId=4753362f5226a8282b281de67ec80077
What Is a Blockchain Consensus Algorithm? (n.d.). Binance Academy. Retrieved August 19, 2020, from https://academy.binance.com/blockchain/what-is-a-blockchain-consensus-algorithm
Wood, G. (2020). Ethereum: A Secure Decentralised Generalised Transaction Ledger. 39.
Zhang, E. (2019). Roadmap of NEO 3.0 Development. Medium. https://medium.com/neo-smart-economy/roadmap-of-neo-3-0-development-e2ae64edf226
Zhang, R., Rui Xue, & Liu, L. (2019). Security and Privacy on Blockchain.
ArXiv:1903.07602 [Cs]. https://doi.org/10.1145/3316481
https://medium.com/@ren-heinrich/comparative-analysis-of-fundamentals-and-smart-contract-transactions-on-eos-and-ethereum-4b070ead578
['Ren']
2020-10-26 20:42:26.661000+00:00
['Blockchain', 'Eos', 'Smart Contracts', 'Blockchain Technology', 'Ethereum']
GIS, Its problems and potential solution approaches in the IT profession
“A geographic information system (GIS) is a computer system for capturing, storing, checking, and displaying data related to positions on Earth’s surface.” GIS can show many different kinds of data on one map, such as streets, buildings, and vegetation. This enables people to more easily see, analyze, and understand patterns and relationships. (Illustration courtesy U.S. Government Accountability Office)

“A geographic information system or GIS integrates data, hardware, software, and GPS to assist in the analysis and display of geographically referenced information.” (Lifewire, 2017)

“An information system that is designed to work with data referenced by spatial or geographic coordinates. In other words, a GIS is both a system with specific capabilities for spatially-referenced data, as well as a set of operations for working [analysis] with the data.” (Star and Estes, 1990)

In my view, GIS can be summarized as follows: “A geographic information system (GIS) lets us visualize, question, analyze, and interpret data to understand relationships, patterns, and trends.”

GIS in detail

Spatial data represents the location, size, and shape of an object on planet Earth such as a building, lake, mountain, or township. Spatial data may also include attributes that provide more information about the entity that is being represented. Geographic Information Systems (GIS) or other specialized software applications can be used to access, visualize, manipulate, and analyze geospatial data.

Microsoft introduced two spatial data types with SQL Server 2008: geometry and geography. Geometry types are represented as points on a planar, or flat-earth, surface. An example would be (5, 2), where the first number represents the point’s position on the horizontal (x) axis and the second number represents its position on the vertical (y) axis. Geography spatial data types, on the other hand, are represented as latitudinal and longitudinal degrees, as on Earth or other earth-like surfaces.
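The practical difference between the two types can be illustrated in plain Python: a geometry-style point uses flat Euclidean distance, while geography-style latitude/longitude coordinates call for a great-circle computation. This is a conceptual sketch only, not SQL Server's actual implementation (which uses an ellipsoidal earth model rather than the simple sphere assumed here):

```python
import math

def planar_distance(p, q):
    # Geometry-style: points on a flat (planar) surface, Euclidean distance.
    return math.hypot(q[0] - p[0], q[1] - p[1])

def haversine_km(lat1, lon1, lat2, lon2, r_km=6371.0):
    # Geography-style: latitude/longitude in degrees on a sphere,
    # great-circle distance via the haversine formula.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2)
    return 2 * r_km * math.asin(math.sqrt(a))

print(planar_distance((5, 2), (8, 6)))          # 5.0 on a flat grid
print(haversine_km(48.85, 2.35, 52.52, 13.40))  # Paris -> Berlin, ~878 km
```

Treating geographic coordinates as if they were planar (e.g. taking Euclidean distance between degree pairs) gives meaningless "degree distances", which is exactly why the two data types are kept separate.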
The availability of spatial databases and the widespread use of geographic information systems have stimulated increasing interest in the analysis and modeling of spatial data. Spatial data analysis focuses on detecting patterns, and on exploring and modeling relationships between them, in order to understand the processes responsible for their emergence. In this way, the role of space is emphasized, and our understanding of the working and representation of space, spatial patterns, and processes is enhanced. In applied research, the recognition of the spatial dimension often yields different and more meaningful results and helps to avoid erroneous conclusions.

THE APPLICATION APPROACH

A slight modification of the process-oriented approach yields a definition which categorizes GIS according to the type of information being handled. For example, Pavlidis’ classification scheme includes natural resource inventory systems, urban systems, planning and evaluation systems, management command and control systems, and citizen scientific systems (Pavlidis, 1982). Applications in forestry may cut across several of these categories but are primarily concerned with inventory, planning, and management. An area of greatly increased attention is the field of land records, or multi-purpose cadaster: systems that use individual parcels as basic building blocks (McLaughlin, 1984). While defining GIS on the basis of applications may help to illustrate the scope of the field, it does not enable one to distinguish GIS from other forms of automated geographic data processing. Geographic information systems are independent of both scale and subject matter.

THE PROCESS-ORIENTED APPROACH

Process-oriented definitions, based on the idea that an information system consists of several integrated subsystems that help convert geographic data into useful information, were formulated originally in the early 1970s by Tomlinson and others (Calkins and Tomlinson, 1977).
Logically, the entire system must include procedures for the input, storage, retrieval, analysis, and output of geographic information. The value of such systems is determined by their ability to deliver timely and useful information. Although the intentions of this process-oriented definition are quite clear, the application of the definition is far too inclusive to help distinguish GIS from computer cartography, location-allocation exercises, or even statistical analysis. By applying such a broad definition one could argue that almost any successful master’s thesis in geography involves the creation of an operational GIS. Similarly, the production of an atlas also would seem to include all the necessary subsystems of a GIS. A process-oriented definition is, however, extremely valuable from an organizational perspective, as well as for establishing the notion that a system is something dynamic that should be viewed as a commitment to long-term operation. Finally, any form of the process-oriented definition of GIS emphasizes the end use of the information and, in fact, need not imply that automation is involved at all in the processing (Poiker, 1985).

THE TOOLBOX APPROACH

The toolbox definition of GIS derives from the idea that such a system incorporates a sophisticated set of computer-based procedures and algorithms for handling spatial data. Published works by Tomlinson and Boyle (1981) and Dangermond (1983), for example, provide very complete delineations of the operational software functions that one should find in a full-featured GIS. Typically, these tools are organized according to the needs of each process-oriented subsystem (e.g., input, analysis, or output). The toolbox definition implies that all of these functions must be present and should work together efficiently to enhance the transfer of a variety of different types of geographical data through the system and ultimately into the hands of the end-user.
Therefore, even though they are important components of automated geography, neither digitizing, image processing, nor automated mapping systems qualify as GIS, because they do not possess all the necessary tools and do not provide the overall integration of functions. While checklists are very useful for evaluating different systems, they fail to provide a viable definition of the field.

THE DATABASE APPROACH

The database approach refines the toolbox definition of GIS by stressing the ease of interaction of the other tools with the database. For example, Goodchild states, “A GIS is best defined as a system which uses a spatial database to provide answers to queries of a geographical nature. …The generic GIS thus can be viewed as a number of specialized spatial routines laid over a standard relational database management system” (Goodchild, 1985). Peuquet would agree that a GIS must start with an appropriate data model.
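Goodchild's "spatial routines laid over a standard relational database" idea can be sketched in miniature with Python's built-in sqlite3 module. The table, feature names, and bounding-box routine below are hypothetical illustrations, not part of any real GIS product:

```python
import sqlite3

# A toy "spatial database": ordinary relational storage of features
# with x/y coordinates, queried by a simple spatial routine.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE places (name TEXT, x REAL, y REAL)")
conn.executemany(
    "INSERT INTO places VALUES (?, ?, ?)",
    [("well", 1.0, 1.0), ("barn", 4.0, 5.0), ("gate", 9.0, 2.0)],
)

def within_bbox(conn, xmin, ymin, xmax, ymax):
    # A minimal "query of a geographical nature": which features fall
    # inside a rectangular window?
    rows = conn.execute(
        "SELECT name FROM places "
        "WHERE x BETWEEN ? AND ? AND y BETWEEN ? AND ? ORDER BY name",
        (xmin, xmax, ymin, ymax),
    )
    return [r[0] for r in rows]

print(within_bbox(conn, 0, 0, 5, 6))  # ['barn', 'well']
```

Real spatial databases generalize this with geometry types, spatial indexes (e.g. R-trees), and far richer predicates than a bounding box, but the layering — standard relational storage plus specialized spatial routines — is the same.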
GIS and CAD

The following are the criteria on which we can differentiate CAD and GIS.

Modeling

CAD models things in the real world. GIS models the world itself. Therefore, GIS uses geographic coordinate systems and world map projections, while CAD coordinates are relative to the object being modeled and are not usually relative to any particular place on earth.

Objects

CAD objects include lines, circles, arcs, text, etc., using layers, blocks, internal data, and dimensions. CAD objects don’t know about each other, even though they may touch or overlap. GIS objects know about each other:
• GIS understands networks. For instance, the lines describing streets are related to one another.
• GIS understands enclosed areas (polygons) and their associativity with other objects.
• GIS understands connectivity, conductivity, and associativity, which enable spatial analysis.

Topology

The primary difference between CAD and GIS is topology. GIS has it, CAD doesn’t. In a CAD environment, the objects (lines, polylines, points, etc.) have no relationships between them. Topology brings these objects together into logical groups to form real-world models.
• Node topology allows spatial analysis, such as buffering to determine other objects within a certain range.
• Network topology allows modeling of direction and resistance. Path tracing finds the fastest or best route. Flood tracing determines the maximum flow from a given point and network resistance. As with node topology, buffer analysis can be applied to networks too.
• Polygon topology enables polygons to have relationships. Polygons also have centroids which can be used to hold data relevant to the polygons. Polygon spatial analysis includes overlay analysis, such as determining parcels in a floodplain. Polygons can be “dissolved” using attributes with common values to remove interior lines, in effect aggregating polygons within the same class.

Topology and spatial analysis differentiate GIS from CAD.
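The path-tracing idea behind network topology can be sketched with a standard shortest-path algorithm. Below is a minimal Dijkstra implementation over a hypothetical street network, where edge weights stand in for the "resistance" described above; real GIS packages implement far richer versions of this (turn restrictions, one-way streets, time-dependent costs):

```python
import heapq

def shortest_path_cost(graph, start, goal):
    # Dijkstra's algorithm: repeatedly settle the cheapest unexplored
    # node until the goal is reached. Edge weights model resistance.
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, cost in graph.get(node, []):
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return float("inf")  # goal unreachable

# Hypothetical street network: node -> [(neighbour, traversal cost), ...]
streets = {
    "A": [("B", 2.0), ("C", 5.0)],
    "B": [("C", 1.0), ("D", 4.0)],
    "C": [("D", 1.0)],
}
print(shortest_path_cost(streets, "A", "D"))  # 4.0, via A-B-C-D
```

Because the street segments "know about each other" through the graph structure, the route A-B-C-D (cost 4.0) beats the direct-looking A-B-D (cost 6.0); a pile of unrelated CAD lines could not support this query at all.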
Data Management

GIS separates object storage from object display, combining data from multiple sources into a virtual data warehouse. That data can then be used in any number of separately defined analyses or presentations. CAD systems carry baggage such as line color, line width, etc. that is not relevant to the data itself. GIS systems are usually disk-based and can model larger areas than CAD implementations, which are usually memory-based. For instance, CAD files are typically smaller, such as product designs, as compared to regional, state, or even world models in GIS.

The Trend

While the distinction between CAD and GIS is grey now, as features are added to CAD systems the distinction will blur even more. CAD will likely remain the tool for design and construction, GIS for initial planning and layout. The table summarizes the main differences.
https://medium.com/@nirmaladhikari/gis-its-problems-and-potential-solution-approaches-in-the-it-profession-b65dae493746
['Nirmal Adhikari']
2020-09-02 17:08:33.837000+00:00
['Information Technology', 'Solutions', 'Geographic Information', 'GIS']
Jade Service Runner
Jade Suite

The ECLC team is clearly focused on building with the entire p2p landscape in mind. Setting the tone with their first release, OpenRPC, the team is creating the foundational development layer that has been sorely missing from the blockchain ecosystem. The Jade suite is going to make building applications on top of Ethereum and other p2p technologies as easy as using popular web frameworks like Rails. It’s not just a set of tools, but also a paradigm for application development.

The first fundamental of the paradigm is that users should be able to choose the security model that best suits them. We achieve this by decomposing ethereum into pieces with specific roles, defining a specification for the individual components. The application developer need not concern themselves over the details of which version/vendor of xyz is the user running and where. — Zac Belford, ECLC developer

Oh, and if you haven’t been keeping up with OpenRPC, the specification now supports Golang.

Jade Service Runner

Stevan Lohja, Technology Coordinator at ECLC, explains: “Jade-Service-Runner is a tool developers can use to simply run services in the background. For example, a DApp developer needs mainnet or testnet to deploy dapps against, so they can tell the Service Runner to run Geth or any other service they incorporate into their environment.” It’s this focus on simplicity that the team thinks will resonate with the development community, making the Service Runner (and the entire Jade suite) an essential toolbox moving forward. The team is so confident in the positive reception and use of the tool that they are already looking to the future.

One of the potential longer term plays is basically allowing jade service runner to be a p2p node that uses open-rpc discover to create a network of decentralized services that people can connect to.
The service runner would then basically act as gateway that would let people use each others services or to scope their interactions to just use their local connections. — Zane Starr, ECLC developer

Better dApp Development

Service Runner improves the dApp development cycle by reducing the number of steps required for running services that are local to the user, and by easing the difficulties associated with relying on locally running JSON-RPC services. To do this effectively, Jade Service Runner supports the following:

Allows dApp developers to specify what services they’d like to use
Provides defaults for the services to run
Provides users with an easy installation path
Provides reliable discovery of pre-existing services run by the service runner
Provides an OpenRPC interface to the Service Runner functionality, as well as the underlying services
Allows dApp developers the ability to retrieve reliable JSON-RPC connection information from the service
Provides typed interfaces to develop applications against

Getting Started

Install jade-service-runner using npm:

npm install -g @etclabscore/jade-service-runner

It also has a JavaScript client:

npm install @etclabscore/jade-service-runner-client

Then require it into any module:

const { ServiceRunner } = require('@etclabscore/jade-service-runner-client');
const ERPC = require('@etclabscore/ethereum-json-rpc');

const serviceRunner = new ServiceRunner({ transport: { type: "http", port: 8002, host: "localhost" } });

(async () => {
  const serviceName = 'multi-geth';
  const successful = await serviceRunner.installService(serviceName);
  if (successful === false) throw new Error('Service not installed');
  const serviceConfig = await serviceRunner.start(serviceName, 'kotti');
  const erpc = new ERPC(serviceConfig);
  erpc.getBalance("0x0DEADBEEF");
})();

To run the service runner:

jade-service-runner

Supported Services

Currently it supports multi-geth with the following environments:

mainnet (ETC)
kotti
ethereum
goerli
rinkeby

More Resources for Jade-Service-Runner and OpenRPC
https://medium.com/hackernoon/jade-service-runner-bd5ca222b7fc
[]
2019-06-23 00:18:39.845000+00:00
['Blockchain Development', 'Ethereum', 'Json', 'Ethereum Classic', 'Blockchain Technology']
ODS breakfast is coming to your city! Paris, and…
ODS breakfast is coming to your city! Paris, and… …it depends on you! This post is a celebration of Parisian Open Data Science (ODS) breakfasts, weekly unofficial gatherings about Data Science and life, held on Saturdays at Malongo Cafe (near Place Saint-Michel).

Thanks to Yulia Dembitskaya for the logo

Open Data Science breakfasts are available not only in Paris, but in many cities across the world. At a Data Science breakfast, you can meet a fellow Data Scientist over coffee to talk about Data Science and anything else. At a Parisian Data Science Breakfast, you can meet people from both industry and academia. You will find diverse expertise in Data Science and beyond. Collectively, we have knowledge in Mathematics and Physics, Computer Science, Quantitative Finance and Trading, Neuroscience, Spiking Neural Networks, Deep Learning, Optimization, Statistics, Network Science, Recommender Systems, DevOps, Software and Machine Learning Engineering, and more. Read on to find out about all kinds of brain food that you can get from ODS breakfasts and to learn community-building lessons.

Where it all began

From Russia with love

The Data Science Breakfast tradition started in Moscow circa 2013. People gathered to grab an early breakfast and to talk about data science and life before work. The tradition then spread to other cities in Russia and to other countries. Data Science Breakfasts are backed by a strong international community — Open Data Science (ODS). ODS unites researchers, engineers, and developers around Data Science and related areas. To quote https://ods.ai/en/, the goals of the community are:

🔥 Create awesome projects, events and educational courses
💪 Share experience, developing each other’s skills
📈 Promote open Data Science and push the field forward

Along with Open Data Science Breakfasts, ODS organizes multiple events across Russia and abroad.
ODS slack is in Russian, but the quality of the content is so good that many non-Russian-speaking people read it with translation. Luckily, there is an English-language Telegram channel https://t.me/opendatascience that aggregates major Data Science news (it is still a fraction of what you can find on slack). Parisian ODS breakfasts are announced on https://t.me/opendatascience and on ODS slack (http://opendatascience.slack.com/, get an invite at https://ods.ai/en/) in the #_meetings_france channel. You can find all kinds of information about the Parisian breakfast at https://data-science-breakfast-in-paris.github.io/ and you can propose topics to discuss at the next breakfast at https://data-science-breakfast-in-paris.github.io/feedback/.

In Paris: the beginning, the growth, and the future

ODS has been spreading its tentacles internationally and, along the way, converting people all over the world into faithful followers of the Data Scientology cult =) Get ready to get converted in Paris. It all started when one of the ODS activists, passing through Paris on his vacation, organized the first breakfast on 5 January 2019 at Malongo Café and announced it on https://t.me/opendatascience.

The first Parisian ODS breakfast

He told me “you should continue organizing it and make it a tradition”. I am happy that I said “sure”, as the breakfasts attracted a lot of bright and interesting people. There were highs and lows, but overall the breakfasts became a success. There were episodes of low attendance and silence, but the breakfasts have been accelerating. This statement is too qualitative for a post that has Data in its title. Of course, you want something more quantitative. And now… what you’ve been waiting for… her majesty… Data. There are 90 people in the Telegram group, most of whom have attended at least one breakfast, and the number is growing:

Breakfast is growing. The sky is the limit?

Why was the growth of the DS breakfast community so steady? Community.
It is backed by a very strong international community and is announced through its powerful channels. A great format: it is empirically proven that people across the world find the DS breakfast a great format for a meetup. Persistence: I learned the importance of persistence. After the first breakfast, which was attended by 8 people, I organized the second one, and it was just me and Johannes Thiele. What a low point! However, it was just a start; we simply moved on and it paid off. We had a period of low activity during my vacation, from the end of March to the first weeks of April, but we reemerged stronger. We had not only breakfasts; we also saw each other at other meetups and had two afterworks. If you come to breakfasts now, it is extremely unlikely that you will see just two people. Most likely, we will be around 7 (check out the whole distribution below). Activate a hub in a social network, and the activated hub will bring new community members. Guess whom we should thank the most? Right, it is the founding father, Kirill, who has added the most people to the Telegram group (see below). I am finishing second in this race, and the third place goes to Liubov Tupikina. Liubov has contributed a lot to community building. She is a researcher and a hub in social networks. It is a funny coincidence that she happens to have published scientific papers on Network Science.

Distribution of the number of people

We should thank them for community development

Below, you can see who you can expect to see at a Parisian Data Breakfast.

Demographics that is going to evolve

You are very likely to see Russians, but we almost always have non-Russian-speaking people (at least one), and all of us speak English so that everyone is included.
We can improve by attracting more women and non-Russian speakers, and you can help here by joining us =) Although Russian speakers are around 80% (see the figure below), we stick to English, and, in total, breakfast attendees speak at least 11 languages: Russian, English, French, German, Chinese, Spanish, Latvian, Portuguese, Finnish, Kyrgyz, and Arabic (forgive me if I forgot something; I surely did). We still have some work to do on diversity.

If you don’t speak Russian, please come

ODS is proud of its origin, but we have large international ambitions and we want to attract more non-Russian-speaking people. We want you to join us! For a technical community, we are not doing badly in terms of gender equality (around 20% women), and I suspect that some of our women participants are plotting to infiltrate the Paris Women in ML meetup and convert some ladies to Data Scientology.

Gender balance that is going to evolve

Initially, we were not very good at remembering to take a group photo, but over time we improved our photo discipline and finally developed a stable photo-taking habit. Do you see this nice-looking coffee on the bottom-right? It tastes very good as well. Keep it up, guys! Let’s attract new bright people to our Data Scientology cult :)

Where it is all going

The future of the breakfast is bright, and it depends on the collective will of the community (on you!). Some time ago, I did a questionnaire to understand people’s wants. Here are the results. I am thankful to all of you who proposed ideas, either by filling in the questionnaire or in private discussions. Here is the synthesis:

Prepared during the week, an optional list of no more than 3 DS-related topics to discuss: news, papers, problems that some of us face (make an easy way for people to propose topics during the week so that people can vote and think about it before breakfasts). Done.
Structure the info about the community and intra-community services. DS events: talks, hackathons, etc. Try once to replace the Saturday breakfast with an evening event (beers?). Career-development activities: interview problems, mock interviews, etc. The future is bright, come join us at Malongo Cafe. See you next Saturday at 10:30. If you are interested in setting up weekly ODS Breakfasts in your city, contact us at: https://t.me/malev
https://towardsdatascience.com/ods-breakfast-is-coming-to-your-city-paris-and-562b1244febd
['Ilya Prokin']
2020-02-19 18:09:46.378000+00:00
['Artificial Intelligence', 'Machine Learning', 'Paris', 'Technology', 'Data Science']
1,065
E51: A Healthcare Renaissance Man — Dr. Nisarg Patel
“I have a problem liking too many things.” Learn from Dr. Nisarg Patel, a healthcare renaissance man, about his journey into the world of medicine. Unafraid to “dive into the deep end,” Dr. Patel is a scientist and researcher of computational cancer genomics, synthetic gene circuits for engineered probiotic therapeutics, and CRISPR/Cas9-mediated bacterial genome engineering, as well as applying machine learning to health policy and economics. He also worked in venture capital at Bessemer Venture Partners in their Life Sciences division. While most medical students take classes and apply them to their clinical practice, Dr. Patel did that and then also founded Memora Health, a Y Combinator-funded technology company providing HIPAA-compliant patient texting. He has also written about health policy, medicine, strategy, and economics for Slate Magazine, STAT, Huffington Post, and on Medium. In this episode, he shares his perspective on shopping for healthcare, the future of primary care, where healthtech works and doesn’t work, and how he fills his insatiable urge to write. Don’t be afraid to dive into the deep end with Dr. Nisarg Patel! @nxpatel https://www.memorahealth.com/ @Memora @SherpaPod @TheBenReport @annsomerswh We highly recommend his articles on Medium: https://medium.com/@nxpatel A Sherpa’s Guide to Innovation is a proud member of the Health Podcast Network @HealthPodNet - Listen Now:
https://medium.com/a-sherpas-guide-to-innovation/e51-a-healthcare-renaissance-man-dr-nisarg-patel-35df3711804c
['Jay Gerhart']
2019-09-12 22:38:56.438000+00:00
['Genomics', 'Innovation', 'Podcast', 'Healthcare Technology', 'Healthcare']
1,066
The power of 256 in Blockchain
How to pronounce 2²⁵⁶? 2²⁵⁶ is pronounced “two to the power of two hundred and fifty-six”. What does it represent? Computers represent numbers in binary. Below is an example where we have 2 bits, and each bit can represent a ‘1’ or a ‘0’, allowing us to produce 4 possible combinations: 00 = 0 01 = 1 10 = 2 11 = 3 The formula “2 to the power of the number of bits” gives us the range of numbers that we can represent. So 2² equals 4, which matches the table above where we have 4 possible values. (In computing, we start counting from ‘0’) If we have 3 “bits” then we have 8 possible combinations (ie 2³ = 8): 000 = 0 001 = 1 010 = 2 011 = 3 100 = 4 101 = 5 110 = 6 111 = 7 If we have 256 ‘bits’, then we have 2²⁵⁶ possible combinations, which allows us to represent a really large number! What does 2²⁵⁶ look like in decimal? 115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,936 OR 1.158 x 10⁷⁷ (roughly a 1 followed by 77 more digits). Source: https://defuse.ca/big-number-calculator.htm How do you pronounce the full decimal expansion of 2²⁵⁶? 115 quattuorvigintillion 792 trevigintillion 89 duovigintillion 237 unvigintillion 316 vigintillion 195 novemdecillion 423 octodecillion 570 septendecillion 985 sexdecillion 8 quindecillion 687 quattuordecillion 907 tredecillion 853 duodecillion 269 undecillion 984 decillion 665 nonillion 640 octillion 564 septillion 39 sextillion 457 quintillion 584 quadrillion 7 trillion 913 billion 129 million 639 thousand 936 Source: https://www.wolframalpha.com/input/?i=2%5E256 What can I compare 2²⁵⁶ with? 2²⁵⁶ = 1.158 x 10⁷⁷ So 2²⁵⁶ is vastly more than the number of stars in the universe and only a few zeros short of the number of atoms in the observable universe. Here is an interesting video on 2²⁵⁶ that a friend of mine brought to my attention (Thanks Barry!) Why is 2²⁵⁶ important? 
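The bit-counting rule above is easy to check mechanically. The following Python sketch (illustrative, not from the article) enumerates every pattern for a given bit width and confirms the "2 to the power of the number of bits" formula, plus the decimal expansion of 2²⁵⁶ quoted in the text:

```python
# Enumerate all bit patterns of a given width and confirm the
# "2 to the power of the number of bits" rule from the article.
from itertools import product

def combinations(bits):
    """Return every bit pattern of the given width as a string."""
    return ["".join(p) for p in product("01", repeat=bits)]

three_bit = combinations(3)
print(three_bit)                 # ['000', '001', '010', ..., '111']
print(len(three_bit) == 2 ** 3)  # True: 8 patterns, values 0 through 7

# Python integers are arbitrary precision, so 2**256 prints exactly:
print(2 ** 256)
```

Running this prints the same 78-digit number the article quotes, confirming the arithmetic.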
2²⁵⁶ is important because this is the ‘perceived’ range of all possible private key values that cryptography uses in blockchains Does that mean there are 2²⁵⁶ possible private keys? Not quite. Not all numbers in the 2²⁵⁶ range are on the mathematical curve used for finding a matching public key. Bitcoin and Ethereum (and many others) use the secp256k1 elliptic curve, which defines a range slightly less than 2²⁵⁶. 432420386565659656852420866394968145599 less, to be exact. The range, ’n’, actually runs from 0 to 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141 (as defined in the SEC2 standard). 2²⁵⁶ = 115792089237316195423570985008687907853269984665640564039457584007913129639936 n = 115792089237316195423570985008687907852837564279074904382605163141518161494337 2²⁵⁶ - n = 432420386565659656852420866394968145599 but this is a minor point. This is like computing 1,000,000 minus 10, which can still be considered one million. Side note: The name secp256k1 can be broken down to SEC, which is the Standards for Efficient Cryptography SEC2. p means that the curve coordinates are a prime field, 256 means the prime is 256 bits long, k means it is a variant on a so-called Koblitz curve, and 1 means it is the first (and only) curve of that type in the standard. (Source: https://bitcointalk.org/index.php?topic=2699.0) But wait, there’s more! A Bitcoin address is a RIPEMD160 hash of the public key. This means that the length of the address is 160 bits, meaning the possible keyspace, or possible range of values, is now reduced to 2¹⁶⁰, which is still a very big number. Ethereum also reduces the address length to 160 bits. Represented in decimal it is: 2¹⁶⁰ = 1.46 x 10⁴⁸ or 1461501637330902918203684832716283019655932542976. How big is this? The width of the observable universe is 8.8 x 10²⁶ meters or 8.8 x 10²⁹ millimetres. If every Bitcoin or Ethereum address represented a length of 1mm, the addresses laid end to end would span about 1.7 x 10¹⁸ times (over a quintillion times) the width of the observable universe! 
1.46 x 10⁴⁸ is actually the total number of unique wallet addresses possible. Now here is the tricky part. We have 2²⁵⁶ possible private keys that map to 2¹⁶⁰ possible addresses. Logic tells us that there will be more than 1 private key for every address. In fact, all you have to do is find any one of the roughly 2⁹⁶ private keys whose corresponding public key hashes to that address. In other words, 2⁹⁶ represents how many potential PRIVATE keys would work for a single PUBLIC address. Good luck finding one though! Summary In summary, the range of possible private key values is very, very large, although slightly under 2²⁵⁶, as defined in the SEC2 standard. The public key is then hashed to 160 bits, so the possible address range becomes 2¹⁶⁰, which is still a very large number (at 1mm per address, over a quintillion times the width of the observable universe, in fact!). The point being that the possibility of two private keys being the same is super duper duper low. Ref: https://www.reddit.com/r/Bitcoin/comments/279l5v/an_exhaustive_look_at_private_keys_for_the/
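The keyspace arithmetic above can be reproduced directly from the SEC2 constant quoted in the text. A small Python sketch (illustrative, not from the article):

```python
# secp256k1 group order n, as published in the SEC2 standard
# (the hex constant quoted in the article).
N_HEX = "FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141"
n = int(N_HEX, 16)

two_256 = 2 ** 256
# The gap between 2**256 and n, "432420...599 less to be exact":
print(two_256 - n)

# Hashing the public key down to a 160-bit address shrinks the keyspace,
# so roughly 2**96 private keys map to each address:
addresses = 2 ** 160
keys_per_address = two_256 // addresses
print(keys_per_address == 2 ** 96)  # True
```

The first print reproduces the exact difference quoted in the article, and the division confirms the 2⁹⁶ keys-per-address figure, since 256 − 160 = 96.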
https://medium.com/decentralize-today/the-power-of-256-in-blockchain-468aa3f395bc
[]
2019-04-10 00:01:11.684000+00:00
['Technology', 'Decentralization', 'Blockchain', 'Cryptocurrency', 'Bitcoin']
1,067
How to Shift Power From the Police to the People
How to Shift Power From the Police to the People A St. Louis proposal would give local communities more control over police use of surveillance tech. Chad Marlow, Senior Advocacy and Policy Counsel, ACLU & Sara Baker, Legislative and Policy Director, ACLU of Missouri OCTOBER 19, 2018 | 4:00 PM In St. Louis, it feels like the police are always watching. Since the launch of its “Real Time Crime Center” in 2015, the city’s embrace of surveillance technologies has often felt like a runaway train. With the highest per capita murder rate in the nation, St. Louis no doubt has a law enforcement challenge on its hands. But whether mass surveillance technologies should play a role in its policing efforts — and if so, to what degree and under what rules — is not a decision that should be made unilaterally and in secret by the police. Fortunately, an effort is underway to change that. The St. Louis Board of Aldermen will soon consider a bill that will empower local elected officials, rather than the police, to make final decisions about using surveillance technologies. St. Louis currently has more than 600 cameras watching its residents 24/7. The city has deployed a system of microphones used to listen for gunshots. It’s also deploying “Stingray” devices, which allow the police to track people’s locations using their cell phones and potentially intercept the communications of thousands of mobile phone users at a time. St. Louis Police Department’s Real Time Crime Center Since March, the police have been utilizing one of the most ominous looking surveillance devices we’ve ever seen, which, costing over $100,000 each, combines multiple surveillance cameras, a noise detector, and a license plate scanner into a single, mobile, solar-powered surveillance system. Conveniently for the police, because many of these systems were funded through grants and “public-private partnerships,” they could be acquired without using city funds and without the knowledge of the Board of Aldermen. 
This growing commitment to surveillance must be viewed through the lens of racially biased policing in St. Louis. While St. Louis is a vibrant, diverse city, it is also one with entrenched policing practices that target people of color. In St. Louis, Black people are 18 times more likely to be arrested for marijuana possession than white people, despite using marijuana at almost the same rate. Similarly, Black drivers are 85 percent more likely to be pulled over in Missouri than white drivers. In short, if you are a Black or brown person and live in St. Louis, the police are likely watching you far more closely than your white counterparts. We have seen time and again that when the ability to monitor communities of color is escalated with new technology, the harms of disparate policing are significantly magnified. Following numerous in-depth examinations of police data, the ACLU and our partner organizations have discovered that, when mass surveillance systems are deployed by local police, they are frequently used to target communities of color. Surveillance technologies have also been frequently used by local police to monitor and intimidate political activists. This, sadly, is a practice St. Louis appears to be embracing. As Kendra Tatum, an Organization for Black Struggle organizer, told the Riverfront Times, she noticed one police surveillance camera had been pointed directly at MoKaBe’s, a frequent meeting spot for activists. Tatum felt the message was clear: “[P]olice are using surveillance cameras as an intimidation tactic against First Amendment rights.” So far, the St. Louis police have avoided any public oversight of their mass surveillance agenda. This is not how you restore community faith in the police or improve a city’s image. So let’s instead begin by accepting an unavoidable truth: From Ferguson to Stockley, St. Louis has a serious trust deficit. This is a deficit the city’s growing use of surveillance technology is only serving to deepen. 
Ironically, the St. Louis Post-Dispatch recently reported that “the St. Louis Police Department has two main strategies to better deter and solve crimes plaguing the city and hurting its national image: Add police officers and expand surveillance technology.” Surely the St. Louis police understand, as many of their law enforcement colleagues do, that there is no evidence surveillance reduces crime. What St. Louis’ use of mass surveillance technologies is likely to do, however, is increase the jailing of people of color and intimidate those exercising their First Amendment rights. Such results will worsen, not improve, the city’s national image. Now, Alderman Terry Kennedy is leading efforts to pass a Community Control Over Police Surveillance (CCOPS) law in St. Louis that would shift control over the use of surveillance technologies from law enforcement to the people and their elected representatives. Nine other jurisdictions around the country have already passed similar local laws, which greatly increase public oversight in the surveillance arena. If adopted, Alderman Kennedy’s bill would move all decision-making power regarding the funding, acquisition, and use of surveillance technologies to the Board of Aldermen — even when the city is not the source of the funds. As part of the approval process, which also applies to technologies already in use, the police would have to provide the board and public with a detailed, legally enforceable plan to govern how the new surveillance tools can be used. The plan would have to include information about the proposed technology’s functionality, intended uses, and restrictions, as well as what protections will be put in place to ensure the technology is used in a constitutional and non-discriminatory manner. Annual public reporting on all of this would be required as well. 
The bill also provides for an open, public hearing process during which members of the public can express their views on the potential uses of the surveillance technology. If the bill is passed, all St. Louis communities will have the final say about how they are policed. That would be a major step in the right direction as St. Louis seeks to build greater trust amongst all those who call it home — and it would set an example for cities nationwide to take to heart.
https://medium.com/aclu/how-to-shift-power-from-the-police-to-the-people-69b13e4ae388
['Aclu National']
2018-10-22 14:12:48.902000+00:00
['Technology', 'Surveillance', 'Police', 'Racial Justice', 'Free Future']
1,068
Predicting Newspaper Sales with Amazon SageMaker DeepAR
A passion for print media At Sales Impact, a 100% subsidiary of Axel Springer, we are all about sales of print media. We provide regional sales activities and wholesale communication for the supervision of retail sales, and the logistics involved in domestic and overseas delivery. We also do the planning and execution of sales marketing measures, customer acquisition within the scope of direct sales, coordination of the German “Sunday market” and much more. On my team, market analytics, we evaluate, advise and control what happens in the German print media market in terms of sales, logistics and advertisement. This happens at an international, national, regional, wholesale and shop level for print media such as WELT and BILD. My work as a Data Scientist mainly gravitates around predicting the market and calculating key figures in the market. Vast complexity, vast opportunities Our shop-level sales data is among our most valuable assets. Without going too deep into detail, we know the sales of some 100,000 shops for Axel Springer’s print media with some delay. Making use of this data is hugely important to understand our print media sales. But sometimes a delay in shop-level sales data is unacceptable, for instance when the editorial department of the BILD wants to know how well it performed last week in terms of sales. We can solve this and other related problems by predicting the sales for these 100,000 shops! Your friend in the cloud Using Amazon Web Services, we can leverage their machine learning solution Amazon SageMaker in order to make such a prediction. But then, how would you predict some 100,000 shops without losing the information that exists among these shops? Fortunately, there is an algorithm out there that takes into account just this: the Amazon SageMaker DeepAR forecasting algorithm. And this really can be translated to any problem that has at least several hundred concurrent time series, e.g. one with many products. 
The DeepAR forecasting algorithm is a supervised learning algorithm for forecasting scalar (one-dimensional) time series using recurrent neural networks (RNN), and it is astonishingly sound. You can hand this algorithm tens of thousands of time series, possibly together with time-independent categories and additional time-dependent information for each time series, and it will train a model that is then able to predict the potential future of a time series (possibly together with its specific time-independent categories and additional time-dependent information). Predicting Germany’s major newspapers’ sales So we adopted and automated this RNN-based algorithm as shown in Figure 1 and saw some major improvements in our prediction quality compared to the single-series approaches such as ARIMA or exponential smoothing that were previously in place. The original paper suggests a general accuracy improvement of around 15% for the prediction of related time series compared to state-of-the-art methods. If you need a starting point for the implementation of DeepAR using SageMaker, I recommend this notebook from Amazon. Figure 1: Exemplary sequence of the workflow. It is amazing how much you can automate with a little help from the boto3 library (the AWS SDK for Python). For your convenience, you can find the complete boto3 workflow below (though the data pre- and post-processing part is missing). Please note that we used .json files as the input data type. This is what a normal run looks like on my laptop: Figure 2: Exemplary workflow as log output. Implications for our business With the more accurate prediction of sales, we are able to give even more accurate projections to the editorial departments. Also, we are working on taking these sales predictions into account to improve the logistical key figures we provide to our business partners. Summary In this article, we write about predicting newspaper sales using Amazon SageMaker DeepAR. 
After a short company and team introduction, we give a brief description of our shop-level sales data and the related problem. We then describe why DeepAR is a well-suited algorithm for this problem, followed by an overview of our solution together with some sample code to reproduce it. Finally, we argue that such a prediction with DeepAR is beneficial to our business. About the author: Justin Neumann is a Data Scientist with an MS in Predictive Analytics, helping to transform companies into analytical competitors. He works at Sales Impact, a subsidiary of Axel Springer.
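The boto3 workflow the article refers to did not survive extraction. As a hedged stand-in, here is a sketch of the kind of request such a workflow assembles for SageMaker's `create_training_job` API; the job name, bucket, role ARN, container image URI, and the specific hyperparameter values are all placeholders (the DeepAR container URI is region-specific), so treat this as a shape illustration rather than the author's actual code:

```python
# Sketch: assemble a create_training_job request for SageMaker DeepAR.
# All concrete names/values below are placeholders, not from the article.

def build_deepar_training_request(job_name, image_uri, role_arn, bucket):
    """Return the request dict passed to sagemaker_client.create_training_job(**request)."""
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": image_uri,   # region-specific DeepAR container URI
            "TrainingInputMode": "File",
        },
        "RoleArn": role_arn,
        "HyperParameters": {              # DeepAR hyperparameters (values illustrative)
            "time_freq": "D",             # daily sales observations
            "prediction_length": "7",     # forecast one week ahead
            "context_length": "30",
            "epochs": "100",
        },
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": f"s3://{bucket}/train/",  # .json input, as in the article
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": f"s3://{bucket}/output/"},
        "ResourceConfig": {"InstanceType": "ml.c5.xlarge",
                           "InstanceCount": 1, "VolumeSizeInGB": 10},
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
    }

request = build_deepar_training_request(
    "newspaper-deepar-demo", "<deepar-image-uri>", "<role-arn>", "my-bucket")
print(sorted(request.keys()))
```

With boto3 this dict would be submitted via `boto3.client("sagemaker").create_training_job(**request)`, and the rest of the automation (polling status, creating a model and transform job) follows the same request-dict pattern.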
https://medium.com/axel-springer-tech/predicting-newspaper-sales-with-amazon-sagemaker-deepar-dffde3af4b20
['Justin Neumann']
2020-02-19 12:44:29.394000+00:00
['Machine Learning', 'Technology', 'Artificial Intelligence', 'Time Series Analysis', 'Deep Learning']
1,069
NASA’s New Astronauts, The Turtles
NASA’s 22nd astronaut class, the Turtles, is posing at the Johnson Space Center in Houston, Texas, 10 Jan, 2020. NASA’s Newest Astronauts Jessica Watkins, a 31-year-old, worked at NASA before becoming an astronaut. She’s from Lafayette, Colorado, and earned a bachelor’s degree in geological and environmental sciences as well as a doctorate in geology. Watkins worked at NASA’s Ames Research Center in Silicon Valley and at NASA’s Jet Propulsion Laboratory in Pasadena. She’s also worked on the Curiosity rover during her time at the California Institute of Technology. Jennifer Sidey-Gibbons, a 31-year-old, is the second Canadian Space Agency candidate, and an assistant professor in combustion at the University of Cambridge. She is from Calgary, Alberta and earned a bachelor’s degree in mechanical engineering as well as a doctorate in engineering. While in school, she worked on research regarding flame propagation in microgravity. Dr. Francisco Rubio, a 44-year-old, is a US Army lieutenant colonel from Miami, Florida, with a bachelor’s degree in international relations and a doctorate of medicine. He previously served as a surgeon and a Blackhawk helicopter pilot during combat. Loral O’Hara, a 36-year-old, is a research engineer from Houston, Texas, with a bachelor’s degree in aerospace engineering and a master’s degree in aeronautics and astronautics. O’Hara engineered, tested and operated deep-sea submersibles and robots at Woods Hole Oceanographic Institution in Massachusetts. Jasmin Moghbeli, a 36-year-old, is a US Marine Corps major from Baldwin, New York, with a bachelor’s degree in aerospace engineering with information technology and a master’s degree in aerospace engineering. She also graduated from the US Naval Test Pilot School and tested H-1 helicopters. Joshua Kutryk, a 37-year-old, is one of the Canadian Space Agency candidates and a Royal Canadian Air Force lieutenant colonel from Beauvalion, Alberta. 
Kutryk earned a bachelor’s degree in mechanical engineering and master’s degrees in space studies, defense studies and flight test engineering. He was previously a fighter pilot and experimental test pilot. Dr. Jonny Kim, a 36-year-old, is a US Navy lieutenant and resident physician in emergency medicine from Los Angeles. Kim received a bachelor’s degree in mathematics and a doctorate of medicine. Kim was also a Navy SEAL who served during more than 100 combat missions, earning a Silver Star and Bronze Star with Combat V (with valor). Warren Hoburg, a 34-year-old, is a commercial pilot from Pittsburgh with a bachelor’s degree in aeronautics and astronautics and a doctorate in electrical engineering and computer science. He previously served the Bay Area Mountain Rescue Unit and Yosemite Search and Rescue. Hoburg was also an assistant professor of aeronautics and astronautics at MIT. Bob Hines, a 44-year-old, is a US Air Force lieutenant colonel from Harrisburg, Pennsylvania, with a bachelor’s degree in aerospace engineering and a master’s degree in flight test engineering. He was also a developmental test pilot on F-15 models while working on his master’s in aerospace engineering. Hines was also deployed during Operations Enduring Freedom and Iraqi Freedom. Matthew Dominick, a 38-year-old, is a US Navy lieutenant commander from Wheat Ridge, Colorado, with a bachelor’s degree in electrical engineering and a master’s degree in systems engineering. Like Chari, he also graduated from the US Naval Test Pilot School. Raja Chari, a 42-year-old, is a US Air Force colonel from Cedar Falls, Iowa, with bachelor’s degrees in astronautical engineering and engineering science as well as a master’s degree in aeronautics and astronautics. Chari also graduated from the US Naval Test Pilot School. 
Zena Cardman, a 32-year-old, is from Williamsburg, Virginia, and holds a bachelor’s degree in biology and master’s degree in marine sciences from the University of North Carolina, Chapel Hill. Cardman’s research includes microorganisms found in environments like caves or even deep sea sediments, and she has conducted several expeditions to Antarctica. Kayla Barron, a 32-year-old, is a US Navy lieutenant from Richland, Washington, with a bachelor’s degree in systems engineering and a master’s degree in nuclear engineering. Barron also served as a submarine war officer. Read More.
https://medium.com/@thepexels/nasas-new-astronauts-the-turtles-e8dec67a1cf7
['The Pexels']
2020-01-12 14:04:42.039000+00:00
['NASA', 'Space', 'Space Exploration', 'Nasa News', 'Technology']
1,070
Write code without if-else
Write code without if-else statements, or the art of avoiding the if statement Not long ago I was looking for a job, and I was once asked to create a function which returns true or false depending on whether the input is a number, but without using if, and I did it. In the beginning, I thought it would be easy peasy lemon squeezy, but then I realised it's not. This task made me think differently. Seriously, it's an exciting way to think outside the box and find new ways to solve old rusty problems that have always been done the same way. I'm not saying that old is bad or that the if statement is bad, but if you see an if in an if in an if, it can be terrifying, unreadable, and confusing. if nesting makes code complex, and this can cause issues in the future. The complexity of code is measured by cyclomatic complexity, which is calculated by counting the number of independent paths through the source code, and every time we use an if statement we create a new path. I prepared little examples that can be used in a job interview or can be reused anywhere. So let's look at some ways to stay away from if Example 1: Check if the value is a number First, let's try to make the function isNum which I already talked about above (from my interview). It's pretty easy; I think you should come up with something like this. This solution is really easy and straightforward. It can be shortened to one line and still be readable. Nothing more to say about it. Example 2: Find a celebrity by name Now, let's try something more complicated. Let's make a function which takes a name and returns the name of the celebrity with the provided first name. In this example, you see how large and unpleasant a simple function becomes if it is overloaded with if statements (same with switch/case). The betterSelector example has many pros, like readability and ease of extension and support, but it has one serious flaw that I have to point out. 
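The article's original snippets were JavaScript and were lost in extraction. As a rough stand-in for Example 1, here is the same idea in Python: a number check whose body contains no if statement, with the name is_num mirroring the article's isNum:

```python
# A number check with no `if` statement: we let the conversion attempt
# decide, and the boolean is returned directly. (The article's original
# was JavaScript; this is an equivalent Python sketch.)
def is_num(value):
    """Return True when `value` can be interpreted as a number."""
    try:
        float(value)
        return True
    except (TypeError, ValueError):
        return False

print(is_num("3.14"))   # True
print(is_num("hello"))  # False
print(is_num(None))     # False
```

Note there is still branching (the except clause), but no nested if chain, which is the article's point.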
If you provide a key that doesn't exist in your selectorObj, you will get undefined as the result, and this can cause many unpredictable issues in your project. Always handle missing values. Example 3: Filter odd numbers The next example is naive: we will filter out all odd numbers and return an array of even numbers as the result This example is very easy and obvious: the filter function just checks whether anything is left over after dividing by 2; if there is a remainder, the number is odd and must not be included in the result. Example 4: Check if the object has a nested value Sometimes we have deeply nested objects which contain data that we need. In these cases, we use something similar to these examples. Here in getDataOld we just check whether the path exists in the current object, then we proceed and repeat until we get our target value, or return false if one of the conditions fails. It's really not a reusable and scalable solution, but sometimes you can find something like this even in production code! To fix it I created another function, getDataNew. It accepts 2 arguments: the first is a paths array, just a string array which represents the keys of the object leading to the nested target field. The second is the object itself. In this function, I used a for loop to go through the fields of the object, saved the previous path in a variable of the same name, and overwrote data with the current path and eventually with the value of the target field. Summary. In the examples above, we see how easy it is to avoid if nesting and create expandable alternatives. If you don't want to create all these functions, you can always use some pattern-matching library from NPM. Remember, there is no such thing as an irreplaceable if. But not all ifs must be replaced. Did I convince you not to use if everywhere? What pros and cons do you see? Please share with us. Anyway, I hope this article was useful to you. Thanks for reading.
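The two remaining patterns, the lookup-object selector and the loop-based nested getter, can be sketched as follows. The originals were JavaScript; this Python version is illustrative, and the names better_selector, get_data_new, and the sample data are stand-ins mirroring the article's:

```python
# Pattern 1: replace an if/else (or switch/case) chain with a lookup table.
CELEBRITY_BY_FIRST_NAME = {
    "Elon": "Elon Musk",
    "Ada": "Ada Lovelace",
}

def better_selector(name):
    # .get() with a default handles the missing-key flaw the article warns
    # about (undefined in JS, KeyError in Python).
    return CELEBRITY_BY_FIRST_NAME.get(name, "unknown")

# Pattern 2: walk a nested object with a loop instead of stacked `if` checks.
# Assumes dict-shaped intermediate values, as in the article's getDataNew.
def get_data_new(paths, data):
    for key in paths:
        data = (data or {}).get(key, {})  # missing keys collapse to {}
    return data or False

print(better_selector("Ada"))                       # Ada Lovelace
print(better_selector("Zzz"))                       # unknown
print(get_data_new(["a", "b"], {"a": {"b": 42}}))   # 42
print(get_data_new(["a", "x"], {"a": {"b": 42}}))   # False
```

The lookup table keeps cyclomatic complexity flat: adding a new celebrity is a data change, not a new branch.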
https://medium.com/front-end-weekly/write-code-without-if-else-statement-or-art-of-avoiding-if-statement-4e44f0248c25
['Jhon Black']
2020-12-28 16:41:55.171000+00:00
['Technology', 'JavaScript', 'Software Development', 'Patterns', 'Programming']
1,071
Greetly Announces First No-Touch Visitor Management System
Greetly, the only fully-customizable visitor management system (VMS) in the world, is now offering all existing and future clients the ability to leverage No-touch visitor management technology to ensure safer visitor check-ins and exits within all types of buildings. Greetly’s No-touch visitor check-in is the first technology of its kind in the world. The offering was first announced in April 2020 with an assurance the solution would be ready for clients to utilize by June of 2020. Greetly’s existing clients will all have access to the new no-touch feature at no additional cost. In addition, the new feature is embedded into all new client plans. “It was critical at all levels of our organization from research and development to business development and senior leadership that our No-touch VMS solution be available to all new and existing clients at no additional cost,” said Dave Milliken, Founder of Greetly. “In these uncertain times when people are concerned for their health and safety, we made a very large internal effort to fast forward our product development to ensure this new tool would be available by June 2020. We made this promise and I am immensely proud of our entire team who worked tirelessly to ensure we could launch as planned and have this tool available today.” Visitors are now able to initiate and complete the check-in process using their own smartphones without having to download any additional apps or touch any third-party screens. Office Evolution, a Greetly client, is utilizing the new No-touch offering throughout its network of coworking locations. Office Evolution is the largest and fastest growing coworking franchisor in the U.S. with 70 locations in over 20 states. As the brand is poised to expand its network this year, the need to provide an additional layer of precaution for its members was critical. 
“The health and safety of our members in our workspaces is our top priority,” said William Edmundson, Office Evolution, COO and GWA Board Member. “Our members have continued to work from coworking spaces during the pandemic because we quickly implemented enhanced protocols and systems to ensure their safety. The way we work has forever changed and Greetly’s No-touch technology complements our goal of providing a healthy environment for our members.” Many additional Greetly clients are utilizing the new No-touch offering, including: Randstad, Sweden Vita Coco Cambridge Innovation Center “Meeting the growing needs of those seeking healthy buildings across the world should immediately become the main goal of the entire VMS marketplace, especially as our global community works united to end the devastating COVID-19 virus. I believe that every building should begin with a no-touch visitor management system. This way guests, residents, and employees can enter their home or workplace hands-free and have a deeper peace of mind in terms of their personal safety,” said Milliken. “Since 2019, our research and development team has been working on this tool and I am pleased to see a seamless usage transition by so many of our clients.” Traditional visitor management systems run off tablet computers in a kiosk, which creates potential risk, as visitors are required to touch a kiosk which may have been used by someone carrying coronavirus or another illness. In addition, cleaning a kiosk is not entirely practical and cannot be guaranteed after each and every use. The welcome screen of the Greetly app will display a QR code or the ability to text a code, both unique to the specific workplace, to initiate the sign-in process. Greetly’s technology ensures visitors are on location before allowing them to begin the check-in process. 
Visitors will then be able to complete the entire check-in process including finding and selecting their host, entering required information about themselves, taking a photo and/or eSigning legal documents. When the process is complete, hosts will be notified. All of this works exactly as if the visitor had used a kiosk. Greetly’s no-touch feature is easy to download and use within minutes of accessing the company’s app and is offered with 24-hour-a-day customer support. The new offering is ideal for visitor check-in and check-out in several workplace environments such as secured facilities, co-working spaces, office buildings, staffing agencies, casinos and resorts, warehouses, military posts and many more. For those interested in learning more about Greetly’s new no-touch solution, please contact the organization here to schedule a free personal webinar. About Greetly Greetly is the only fully customizable visitor management system serving enterprise and SMB clients across the globe. Greetly’s digital receptionist app manages visiting customers, vendors, interview candidates, deliveries, facility tours, scheduled entries and exits, and more. This modernization of office reception capabilities results in significant time and money savings for Greetly clients. The technology — which provides instant visitor notifications, collects e-signatures, and prints visitor badges — can be branded and customized to the unique needs of each work environment. Greetly’s solutions are used by several brands including DHL, the Dallas Cowboys, Office Evolution, Randstad, and the United States Air Force. The company was named in 2020 as a Key Company within the visitor management space by QY Research. To learn more and start a free trial, click here. About Office Evolution Founded in 2003 and franchising since 2012, the Colorado-based company is the largest and fastest growing coworking franchisor in the United States. 
With 70 locations open and nearly 80 in various stages of development, Office Evolution is poised for growth as the demand for suburban workspace increases. The brand’s model fills a niche for suburban-based workers looking for a professional environment to get their work done. Office Evolution continues to lead the workplace transformation that is projected to see nearly 30 percent of all office space become shared office space by 2030, according to a JLL report. Office Evolution is currently operating in 24 states, including Arizona, California, Colorado, Connecticut, Florida, Georgia, Idaho, Illinois, Indiana, Iowa, Kansas, Massachusetts, Michigan, New Jersey, New York, North Carolina, Ohio, Pennsylvania, South Carolina, Tennessee, Texas, Utah and Virginia. For more information about Office Evolution, please visit https://www.officeevolution.com/.
https://medium.com/@greetly/greetly-announces-first-no-touch-visitor-management-system-6983625c561
['Greetly', 'Digital Receptionist', 'Digital Mailroom']
2020-12-23 10:06:14.805000+00:00
['Coworking', 'Covid 19', 'Technology', 'Health', 'Leadership']
1,072
Getting started with Gherkin and testing Flutter App in simple steps
Testing Sample Application

The simplest test that can be written for the Home Page and Person List Page could verify that all the persons displayed on the Person List Page are correct. In Gherkin style, you would write that test as follows (for simplicity, I’m just considering the first 3 personas):

```gherkin
Given Below Personas exists
  | name           |
  | Luke Skywalker |
  | C-3PO          |
  | R2-D2          |
When I navigate to Persona List
Then I See following Personas
  | name           |
  | Luke Skywalker |
  | C-3PO          |
  | R2-D2          |
```

Writing Some Code

The above is just the specification part of your test so far. We still don’t have an actual implementation of the steps written above, so let’s get started with implementing these steps in Flutter. While exploring various options for implementing Gherkin-style specifications, we came across a very good Flutter library, flutter_gherkin (by Jon Samwell), that helps us write these steps. On top of that, we created our flutter_gherkin_addons (by Technogise Pvt. Ltd.) repository to extend and simplify the behavior of this package. Throughout this blog, we will use these two packages and their features.

1. Add the two packages mentioned above to the dev_dependencies section of your Flutter application as below:

```yaml
dev_dependencies:
  flutter_gherkin: ^1.0.4
  flutter_gherkin_addons: ^0.1.4
```

2. Create a directory test_driver in the root of your application.

3. Create a directory features in the test_driver directory and add a file ‘homepage.feature’ in it with the contents below:

```gherkin
Given Below Personas exists
  | name           |
  | Luke Skywalker |
  | C-3PO          |
  | R2-D2          |
When I navigate to Persona List
Then I See following Personas
  | name           |
  | Luke Skywalker |
  | C-3PO          |
  | R2-D2          |
```

4. Create a file ‘config.yaml’ inside this directory with the following contents:

```yaml
reporting: true
stubbing: true
```

5.
Now create a file app.dart in the same directory with the following contents:

```dart
import 'dart:io';
import 'package:flutter/widgets.dart';
import 'package:flutter_driver/driver_extension.dart';
import 'package:persona/main.dart'; // This is your application's main file

void main() {
  enableFlutterDriverExtension();
  runApp(PersonaApp('local')); // This is the place where you initialize the application
}
```

6. Create another file app_test.dart in the same folder with the contents below:

```dart
import 'dart:async';
import 'package:flutter_gherkin_addons/wrapper.dart';
import './steps/details_page_steps.dart';
import './steps/home_page_steps.dart';
import 'steps/mocking_steps.dart';

Future<void> main() async {
  return TestRuntime.start(
    [
      // Step definitions
    ],
  );
}
```

Let us understand what the above code does. The app.dart file will be used by the flutter_gherkin package for initializing your application when you start any test. Remember that this will be done automatically. The app_test.dart file bootstraps your Gherkin environment with the main code that calls TestRuntime.start, which accepts an array of step definitions. Now, let’s write some step definitions.

1. Create a directory steps in the test_driver directory.
2. Create a file home_page_steps.dart in this directory.
3. Write the following code in it:

```dart
GenericGiven1 givenFollowingUserExists() {
  return given1("Below Persona exists", (context, Table dataTable) async {
    // Code for ingesting data into the application
  });
}
```

4. Add a call to the above function in app_test.dart as below:

```dart
import 'dart:async';
import 'package:flutter_gherkin_addons/wrapper.dart';
import './steps/details_page_steps.dart';
import './steps/home_page_steps.dart';
import 'steps/mocking_steps.dart';

Future<void> main() async {
  return TestRuntime.start(
    [
      givenFollowingUserExists(),
    ],
  );
}
```

Let us understand what we did. We created a step definition, Below Persona exists, and let our framework know about this given step. Notice that we used the ‘given1’ function.
This is a wrapper DSL function for creating Gherkin steps in a single go. The ‘1’ in ‘given1’ stands for a step with 1 parameter. Similarly, there are given2, given3… given5 wrappers available. Likewise, you can create the other step definitions as below:

1. I navigate to Persona List

```dart
GenericWhen whenINavigateToPersonaList() {
  return when("I navigate to Persona List", (context) async {
    final locator = find.byValueKey("persona");
    await FlutterDriverUtils.tap(context.world.driver, locator,
        timeout: context.timeout);
  });
}
```

2. I See following Personas

```dart
GenericThen1 thenISeeFollowing() {
  return then1("I See following Personas", (context, Table dataTable) async {
    int index = 0;
    for (var row in dataTable.rows) {
      final locator = find.byValueKey("card-" + index.toString());
      index = index + 1;
      context.expectMatch(
          await FlutterDriverUtils.getText(context.world.driver, locator,
              timeout: context.timeout),
          row.columns.elementAt(0));
    }
  });
}
```

To run the above test, just run the file app_test.dart from the context menu (make sure you have at least one virtual device running).

Some More Action With Data Mocking

If you noticed, throughout this blog we have been testing our application end to end. In case you just want to test the UI of your application and mock the API calls, flutter_gherkin_addons ships with an inbuilt stubbing component that runs as a standalone server on port 8081. Let’s now prepare our data in the “Below Persona exists” step. We would rewrite the step as:

```dart
GenericGiven1 givenFollowingUserExists() {
  return given1("Below Persona exists", (context, Table dataTable) async {
    var persons = [];
    for (var row in dataTable.rows) {
      persons.add(Person(name: row.columns.elementAt(0)));
    }
    TestRuntime.addStub(StubFor.staticHttpGet(
        "/people",
        Response(200, json.encode({"results": persons}),
            headers: {"Content-Type": "application/json"})));
  });
}
```

The above step will make the data available at http://localhost:8081/people (on iOS) and http://10.0.2.2:8081/people (on Android).
You need to inject this URL into your application when you initialize it (in app.dart). This is optional, and you can disable mocking by setting stubbing: false in config.yaml.

Closing thoughts

With recent developments in the software world, software products are becoming more and more complex as whole systems. Therefore, it becomes crucial how you define the behavior of your product and how you test it against that behavior. The software community also acknowledges this shift and is actively coming up with solutions to these problems. As a strong advocate of, and active contributor to, open source, Technogise is also addressing this problem with its open-source library flutter_gherkin_addons. Please feel free to hack around and share your feedback and suggestions, which can help us improve this library.
https://medium.com/technogise/getting-started-with-gherkin-and-testing-flutter-app-in-simple-steps-ea6e769bce2b
['Atmaram Naik']
2020-05-17 07:30:31.508000+00:00
['Flutter', 'Bdd Testing', 'Dart', 'Gherkin', 'Technology']
1,073
Dome or Bullet Security Camera?
A security camera is the most needed system for monitoring your home, shop, office, and outdoor areas; it is hard to imagine doing without one. Different purposes call for different types of security camera, so you need to decide whether you want a dome or a bullet security camera. Dome cameras are best for home and office monitoring, while bullet cameras are perfect for outdoor use and also for small businesses. Bullet security cameras are mostly high quality and durable; dome cameras are less durable but often offer color vision. Now I will discuss both dome and bullet security cameras with their pros and cons.

Dome Security Cameras: The Pros

Dome cameras get their name from their dome-shaped structure. These security cameras are designed to withstand all the elements, both indoors and outdoors. Their construction allows the camera to work even in low-light or no-light settings thanks to the built-in infrared LEDs. All cameras have the ability to send video signals over the internet so an owner can access the footage at any time. Below are some advantages of installing dome security cameras.

Bullet Security Cameras: The Pros

Bullet cameras are named for their distinct cylindrical shape, resembling a bullet. These cameras act as a clear deterrent, and research has shown that the presence of bullet cameras makes a property less desirable to a criminal. These security cameras operate both indoors and outdoors and have features such as a little lip on the tip of the lens to guard against glare and the weather. Below are the main advantages of installing a bullet camera.

Long-range vision

Bullet camera range is far longer than that of other cameras, which positions them as ideal options for large areas like parking lots or backyards. Their field of view is fairly narrow, yet the shape of the camera allows for a bigger lens on a bullet camera than on a dome camera. Long-distance viewing is the bullet camera’s main strength.
The camera’s narrow viewing angle allows it to see clearly at farther distances, similar to how a pair of binoculars works. Often, the cameras are good at capturing clear images of individuals and license plates at great distances, making it easy for someone to identify such things when looking back through the footage.

Variety and adaptability

The dome security camera offers variety in terms of shape, size, and angle. Some cameras are equipped with night sight, while others have pan-tilt-zoom features and motion sensors. These cameras can be placed practically anywhere, from crossroads to parking lots to someone’s backyard. One huge advantage of dome cameras is their massive area-coverage options. The wide angle of the dome camera provides a maximized viewing area and, if equipped with the right sensors, it can act as a panoramic surveillance camera.

Appearance

Both cameras are named after their appearance. A dome camera is a small camera mounted to the ceiling or beneath your exterior eaves. It has a domed cover that goes over the camera portion, hiding it from view and protecting it. Because dome cameras are discreet and blend in well with their surroundings, they may be less visible and are more frequently used indoors. Bullet cameras get their name because they look a lot like a rifle bullet. Some smaller models may look more like a tube of lipstick and may be referred to as lipstick cameras. They mount to the wall on a base and can be positioned as needed. They are more visible and, therefore, may discourage theft or vandalism.

Rotation, Angle and Range

Both bullet and dome cameras have a fixed position, meaning that once they point in one direction, that is what they can see. However, a bullet camera can be easily repositioned to aim differently. Dome cameras must be taken off the ceiling, repositioned, and then reinstalled. This makes bullet cameras somewhat more versatile.
Dome cameras are more restricted, which may make them better for close quarters but not as effective for wide or long ranges. Bullet cameras have a wider and longer range, which makes them ideal for distance and exterior views. They also often have a lip that extends over the lens of the camera to help protect it from becoming dirty and clouded.

Night Vision

Both types of security camera offer a night vision option, but bullet security cameras are more powerful when you want night vision. Many people look to buy bullet cameras for the night vision options alone.

Features

Dome cameras have more features than bullet security cameras. A dome camera can offer color vision, rechargeable batteries, a solar panel, and Wi-Fi. You can see the most dedicated blog post here for dome vs. bullet cameras; it will give you a good idea of which to choose. In the end, I recommend a dome camera for home or indoor use and a bullet security camera for outdoor, small business, or office use.
https://medium.com/@getlockers1/dome-or-bullet-security-camera-6d77a1b03c34
['Smart Locks']
2020-06-02 04:08:02.138000+00:00
['Technology', 'Secure', 'Security Camera', 'Safety', 'Security Camera System']
1,074
Measuring what Matters: Resources to help you track progress towards DEI
At Cowboy Ventures, we know diverse teams perform better, and we are excited about the opportunity to make our workplaces more inclusive for everyone. Earlier this year, we took a step back as a team to think about how we could better support portfolio companies as they work to create teams and workplace cultures to be proud of. We know you can’t manage what you don’t measure, so we worked with HR and DEI experts to create template slides for portfolio companies to use at board meetings to identify, track, and report metrics related to DEI. We were so excited when we started to see the slides show up in board decks and create a space for conversations that might not otherwise happen. We’re now even more excited to share these resources with any company that wants to get started or improve the way it is tracking progress. Here’s a deck with all of the resources. In it, you will find:

- Template Google surveys to copy & modify to collect data
- Template slides to share DEI metrics (one for early-stage companies and one for mid-stage+ companies) with your board, at staff meetings, and/or all-hands
- Recommendations for additional DEI metrics to track
- Resources for additional tools and consulting services

Also, if you’re looking to automate a lot of this data collection and presentation, we worked with Cowboy portfolio company Charthop to create a DEI product that does a lot of the work for you! Please feel free to use and share these resources. If you have any feedback, don’t hesitate to reach out to [email protected]. And if you are a CEO who would like to commit to adding DEI metrics to your regular board materials, we ask that you please fill out this form. We will add you to a growing list of CEOs who are committed to making the tech ecosystem more equitable. We also hope this will help your company attract great talent, as candidates will know you’re committed to building a more inclusive workplace.
(We will add names to the bottom of this article, as we receive responses) There is a lot of progress to be made when it comes to diversity and inclusion across the entire tech ecosystem, including funders, founders, and the employees that make up this growing sector. If you’re interested in learning more, please check out All Raise and also our resources to help new investors break into venture capital. Lastly, a very special thank you to Aubrey Blanche, Merritt Anderson, Bari Williams, Emily Kramer, Alicia Burt, and many others who reviewed and provided feedback on these resources. At your service, Jomayra & Aileen on behalf of Team Cowboy.
https://medium.com/cowboy-ventures/measuring-what-matters-resources-to-help-you-track-progress-towards-dei-7ddfca71ccfe
['Cowboy Ventures']
2020-12-14 14:28:34.431000+00:00
['Diversity And Inclusion', 'Diversity', 'Technology', 'Venture Capital']
1,075
How to Create an Effective Online Church Budget
If you are a pastor or even an active member of a church, you know by now that proper financial planning is essential. Proper planning and monitoring of your budget is the key to identifying wasteful expenses. Doing this will help you adjust better to fluctuations in giving and eventually help you set feasible financial goals, even for your online church. The clarity that creating a strong budget gives will always have a ripple effect. Sometimes pastors are in a fix about getting started on an online church budget, and that’s okay. We have highlighted some starting points to make your budget planning easier. These tips not only work for your online church; they can also work for your physical church.

1. Assess Previous Data

This is one of the best places to start, and it helps if you have good records. Carefully look over the previous few years of financial data to spot patterns in income and expenditure. If you have data for multiple years, you might want to study them; it helps you take a broader perspective. In what areas have you expended the most funds? What is the trend of your income like over the years? Do you detect an upward or downward trend? Either way, by how much? Did the percentages spent on different items and projects reflect your ministry priorities? Which areas need to be cut down or eliminated? Are there things you should kickstart? These critical questions will guide you in setting budget goals for the year. If you do not already use church management software, you may find it challenging to obtain exact data on your church income, giving, and online donations. With ChurchPad, you can always get up-to-date or yearly financial data, set up recurring donations, and manage contribution types — all of which are essential for growing your church.

2. Set Goals For the Year Ahead

To maintain a meaningful, active, and vibrant budget, you need to set goals.
Write down your goals and work towards hitting them. Once you have set goals, it’s easy to monitor your online church budget. Identify which costs would hinder you from hitting your goals and reduce those costs.

3. Auditing Your Income

Once you have identified your income sources, you can begin to look at which ones are static and which ones you can work on. You should also make a note of any income source that may be temporary. For example, if you have facilities that could be rented by another church or organization, that could constitute a source of income for your church.

4. Auditing Your Budget

After you’ve assessed previous data and made your inferences, you are probably now able to make the right decisions in planning your budget. You have also been able to see areas that could be streamlined or improved upon. An important thing is working your budget and ensuring that your income and expenses still align. Start by plugging numbers into the items until the expenditures match your budgeted income. If you can’t make the numbers work, you may need to either cut out some of the expenses or raise your estimated income. It is recommended that you go over the items in your budget at least once a quarter and see how you’re doing. Have you had some unplanned expenses? How did they affect the budgets for other items? Or did you offset them with budget decreases elsewhere? Were you able to achieve the income target set? Or do you need to cut your budget back a bit? With ChurchPad, developing both short- and long-term budgets couldn’t be easier, as you can create unlimited contribution categories to track different types of giving. Monitoring short-term budgets, such as your monthly budget, gives you insight into ongoing expenditure and funds management. It helps you identify places where you’re spending unnecessarily and those where you need to allocate more funds.
Also, investing in software that can maximize your relationships and communications with your congregation can be helpful. Pulling a church budget is quite easy, especially if you’re using a user-friendly Church management software for your online church activities. With ChurchPad, you can pull reports quickly and easily to see how you are doing. Visit www.churchpad.com to find out more about how ChurchPad can help your church.
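The balancing exercise in step 4 — plugging numbers into line items until expenditures match budgeted income — is simple arithmetic. A minimal sketch in Python, with hypothetical category names and amounts for illustration only:

```python
# Hypothetical income and category amounts; adjust until expenses fit income.
projected_income = 120_000
budget = {
    "staff": 60_000,
    "facilities": 30_000,
    "outreach": 20_000,
    "media": 15_000,
}

total = sum(budget.values())
shortfall = total - projected_income
print(shortfall)  # → 5000: cut 5,000 somewhere or raise estimated income

# For example, trim the (hypothetical) media line item to balance.
budget["media"] -= shortfall
assert sum(budget.values()) == projected_income
```

The same check, rerun each quarter with actual figures, answers the "are income and expenses still aligning?" question above.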
https://medium.com/@churchpad/how-to-create-an-effective-online-church-budget-50687f76bcb2
[]
2020-09-03 13:44:34.327000+00:00
['Giving', 'Church Technology', 'Church Management']
1,076
Getting Started With Jupyter Notebooks
Throughout some of my classes and work environments, I have found Jupyter notebooks to be particularly helpful in laying out my code and ideas in Python. I figured writing an article on how to get set up might bring new techniques and ideas into the world for someone, so here it is! Jupyter notebooks are easy and fun to use, and they look pretty nice as well. Setup is easy and quick, but honing your setup to have specific qualities makes up most of the time after the initial installation. Jupyter Notebook and its flexible interface extend the notebook beyond code to visualization, multimedia, collaboration, and more. In addition to running your code, it stores code and output, together with markdown notes, in an editable document called a notebook. When you save it, this is sent from your browser to the notebook server, which saves it on disk as a JSON file with a .ipynb extension. This article will be broken down into just a few short sections:

- Installation
- Setup
- Use

Let’s just jump right in!

Installation

First things first: we need to download and install Jupyter to run our notebooks. You more than likely have pip installed already, and that is what we will use to install Jupyter. First, let’s upgrade pip to get the latest dependencies/plugins:

```shell
pip3 install --upgrade pip
```

Next, we will install Jupyter in a similar way:

```shell
pip3 install jupyter
```

Setup

Once finished with the pip install, we can start the Jupyter server:

```shell
jupyter notebook
```

As shown above, Jupyter runs locally on your host. We can then navigate to any of the URLs shown when you start the server. By default, my server started with the token code enabled, so I had to copy the token from the command prompt where the server was started in order to access it.
Once the token is entered and you click Log in, you are presented with the Jupyter dashboard.

Use

Now that we have it set up and installed, let’s use it! There are numerous use cases for a tool like Jupyter Notebooks. You can use it as you would any Python script for conventional Python work. You can also use pip to install things mid-notebook if something you need is not installed. You can even do fun stuff like plotting, similar to MATLAB or R, if that’s your thing. These are just a few uses for your notebooks. You can also use them to take notes in class, save examples from tutorials, and much more. I use them day to day in my job too! I may update this page continually as I find more novel uses for the notebooks. Thanks for reading!
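As a closing aside: the point above that a saved notebook is just a JSON file with a .ipynb extension is easy to see directly. A minimal sketch of that JSON structure, built with the standard library (the cell contents here are made up for illustration):

```python
import json

# A notebook is a JSON document: top-level format fields plus a list of cells.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 5,
    "metadata": {},
    "cells": [
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": ["# My notes\n"],
        },
        {
            "cell_type": "code",
            "execution_count": None,
            "metadata": {},
            "outputs": [],
            "source": ["print('hello')\n"],
        },
    ],
}

# Saving a notebook is just serializing this dict to a .ipynb file;
# here we round-trip it through a string instead of touching disk.
text = json.dumps(notebook, indent=1)
loaded = json.loads(text)
print(len(loaded["cells"]))  # → 2
```

This is why notebooks diff, version, and parse so easily: any JSON-aware tool can read them.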
https://medium.com/swlh/getting-started-with-jupyter-notebooks-6ac0593fb73d
['Jacob Latonis']
2020-08-18 01:19:07.940000+00:00
['Python', 'Programming', 'Technology', 'Information Technology', 'How To']
1,077
An Insight into Hyperledger Tools
What’s great?

- The Hyperledger umbrella project offers various blockchain-based frameworks and tools for enterprises.
- If you’re planning a Hyperledger-based project, it always pays off to understand what they are first.
- We’ll be looking at some of the tools available with Hyperledger and what they can do.

So, let’s get started!

Hyperledger Avalon

- Avalon is a ledger-independent implementation of the Trusted Compute Specifications published by the Enterprise Ethereum Alliance.
- It aims to enable the secure movement of blockchain processing off the main chain to dedicated computing resources.
- Avalon is designed to help developers gain the benefits of computational trust and mitigate its drawbacks.

Hyperledger Cactus

- Hyperledger Cactus is an Apache V2-licensed open-source software development kit (SDK).
- It is designed and architected to help maximize pluggability so that anyone can use it to connect any DLT to others.
- This can also be done by implementing a plugin.
- This pluggable architecture helps enable the execution of ledger operations across multiple blockchain ledgers.

Hyperledger Caliper

- Caliper is a benchmark tool for blockchain frameworks and relies on a functioning blockchain implementation as the benchmarking target.
- Caliper is not intended to make judgments and will not publish benchmark results, but to provide benchmark tools for users.
- It will produce reports containing a number of performance indicators, such as TPS (transactions per second), transaction latency, resource utilization, etc.
- The key component is the adaptation layer, which is introduced to integrate multiple blockchain solutions into the Caliper framework.

Hyperledger Cello

- Cello serves as the operational dashboard for blockchain, which reduces the effort required for creating, managing, and using blockchains.
- Cello is primarily a tool for DevOps, or the connection between development teams and production.
- Common lifecycle and deployment tasks include starting, stopping, and deleting a blockchain, deploying new nodes, abstracting the blockchain to run on local machines, etc.

Hyperledger Explorer

- Explorer is a web application tool to view or invoke transactions and other information stored in a Hyperledger blockchain deployment.
- Explorer is a useful tool for finding and understanding otherwise machine-readable data stored as encrypted ledger entries.
- The tool also provides enterprise-level visualizations that can help decision makers through intuitive graphs, charts, and tables.
- It is to be used specifically on deployments of blockchains created using the Hyperledger umbrella.

Thank You For Reading
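The performance indicators that Caliper reports, TPS and transaction latency, are simple to derive from submit/commit timestamps. A minimal illustrative sketch (this is not Caliper’s API, and the timestamps are made up):

```python
# Illustrative only: computing throughput (TPS) and average latency
# from hypothetical transaction submit/commit times, in seconds.
submit_times = [0.0, 0.1, 0.2, 0.3]
commit_times = [0.5, 0.7, 0.9, 1.1]

# Latency of each transaction: commit time minus submit time.
latencies = [c - s for s, c in zip(submit_times, commit_times)]
avg_latency = sum(latencies) / len(latencies)

# Throughput: transactions committed over the benchmark window.
duration = max(commit_times) - min(submit_times)
tps = len(commit_times) / duration

print(round(avg_latency, 2))  # → 0.65
print(round(tps, 2))          # → 3.64
```

Caliper automates exactly this kind of bookkeeping at scale through its adaptation layer, alongside resource-utilization tracking.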
https://medium.com/@blockchainx-tech/an-insight-into-hyperledger-tools-3b79db58119
[]
2020-12-18 08:04:43.813000+00:00
['Blockchain', 'Blockchain Development', 'Blockchain Technology', 'Blockchain Application']
1,078
A Bright Future for Moore’s Law
By Robert Chau, Senior Fellow and Director, Components Research at Intel Originally posted on December 9, 2019 Moore’s Law has been the guiding principle for the semiconductor industry for more than fifty years. For thirty of those years, I have had the privilege of working in Intel’s technology development organization — giving me a bird’s eye view of the breakthrough innovations that have enabled continued improvements in transistor density, performance, and energy efficiency. While there are many voices today predicting the imminent demise of Moore’s Law, I couldn’t disagree more. I believe the future is brighter than ever, with more innovative technology options in the pipeline now than I have seen at any point in my career. At its simplest level, Moore’s Law refers to a doubling of transistors on a chip with each process generation. Over the years, this exponential increase in transistor density has remained remarkably consistent, but two things have changed along the way: how we achieve these density increases and the benefits we derive at the product level. Whether it’s higher frequencies and lower power consumption or more functionality integrated on a chip, Moore’s Law has adapted and evolved to meet the demands of every technology generation from mainframes to mobile phones. This evolution will continue as we move into a new era of unlimited data and artificial intelligence. What innovations will drive Moore’s Law over the next decade? I believe they can collectively be categorized into two broad areas: monolithic scaling and system scaling. Monolithic scaling might be referred to as “classic” Moore’s Law scaling, with a focus on reducing transistor feature sizes and operating voltages while increasing transistor performance. System scaling improvements are the gains that help us incorporate new types of heterogeneous processors via advances in chiplets, packaging, and high-bandwidth chip-to-chip interconnect technologies.
Intel is investing heavily in research to support both vectors. At this week’s annual gathering of the world’s top semiconductor process technologists — IEDM in San Francisco — Intel engineers are presenting nearly twenty papers demonstrating groundbreaking work to advance Moore’s Law for the next generation. What follows is a high-level summary of these exciting technology options. Monolithic Scaling: A New Dimension Current Intel processors are based on a transistor structure known as FinFET, in which the gate surrounds the fin-shaped channel on three sides. As Intel’s process nodes have advanced, we have made the fins taller and narrower, allowing us to reduce the number of fins necessary to achieve a given level of performance. While FinFETs still have plenty of life, at some point in the near future the industry will transition to a new type of transistor architecture: Gate-All-Around (GAA) FETs, in which the gate wraps around the channel on all sides. There are multiple potential implementations of GAAFETs, from skinny nanowires to wide nanoribbons. What they share in common is the ability to pack more high-performance transistors into a given area, thus reducing the width of the standard cells our designers use to build new processors. In addition to this new transistor architecture, another way to drive cell area scaling is through vertical stacking of transistor devices. Modern semiconductors are built from complementary pairs of both negatively and positively charged transistors called NMOS and PMOS. The height of a standard cell can be significantly decreased through monolithic stacking of a NMOS device on top of a PMOS device, or vice versa. This can be accomplished by stacking FinFETs, GAAFETs, or even a combination of both. Monolithic stacking of transistor devices doesn’t just deliver improved density. 
It is a powerful way to integrate multiple materials on a single silicon substrate, providing significantly improved performance and opening the door to entirely new classes of products with unique functionality. At this year’s IEDM, Intel engineers are demonstrating two innovative approaches to monolithic integration. In the first example, our team has stacked a germanium-based GAAFET PMOS device layer on top of a more traditional silicon FinFET NMOS device layer. Germanium is an element with many similar properties as silicon, but it has found limited use in semiconductor chips because it can be challenging to manufacture alongside silicon. However, because of the structure of its crystal lattice, using germanium in the transistor channel can significantly improve the switching speeds of a PMOS device, which typically operates more slowly than its complementary NMOS device. Monolithic processing allowed us to fabricate a germanium-based PMOS device with record-setting performance, and then stack it on top of a silicon-based NMOS device. In the second example, another team has used monolithic integration to stack a standard silicon PMOS device layer on top of a NMOS device layer that leverages a channel made from gallium-nitride — a compound that is widely recognized as the best material for power delivery and radio frequency (RF) applications, such as next-generation 5G front-end modules. These types of chips are currently built as standalone units, but this new technique could allow for full integration of RF functionality with standard silicon-based processors. System Scaling: Beyond the Transistor Continuing to drive Moore’s Law scaling requires integrating improvements from every aspect of the manufacturing process, not just at the transistor level. For decades, many in the industry viewed packaging as simply the final manufacturing step — the place where we make the electrical connections between the processor and the motherboard. 
But this has changed dramatically in recent years. Ten years ago, the emphasis in SoC integration was on implementing GPU and I/O functionality in the same die as a high-performance CPU. In the future, advanced packaging technologies will be used to link different types of processors together, without forcing them to share a single manufacturing material or process node. This type of dis-integration may seem, at least initially, to be the antithesis of what Moore’s Law is intended to accomplish, but the performance and density improvements gained by matching each type of processor to its own best-fit transistor logic and design implementation often outweigh the negatives caused by separating a monolithic die into smaller chiplets. In fact, in his original paper in 1965, Moore stated that it “may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected.” Intel has already deployed technologies like EMIB (Embedded Multi-die Interconnect Bridge) and Foveros to connect chiplets in both two and three dimensions, such as placing HBM between CPU and GPU (as in Kaby Lake G, with EMIB), or to connect the 10nm compute die used in Intel’s upcoming Lakefield processor face-to-face with the 22nm I/O die directly below it. We also have plans to combine Foveros and EMIB together, in a technology called Co-EMIB, in which multiple 3D Foveros chips are connected via EMIB, allowing Intel to build chips far larger than the reticle size for any monolithic processor and scale out chip designs much more widely than before. Intel is already looking ahead past Co-EMIB toward a new standard called Omni-Directional Interconnect. One of the problems with stacking chips on top of each other using existing methods like through-silicon vias is that the amount of power you can push through such tiny wires is limited. 
ODI uses much thicker vias for power delivery, while offering the same capabilities as Foveros when deployed for 3D face-to-face bonding. ODI can be used to connect chiplets in a wide variety of configurations, including scenarios in which one die is partially buried and acting as a bridge between two others, completely buried, or even between two slightly overlapped die, with ODI used between them for thicker power pillars, allowing for chips to be packed much more tightly together. The ability to integrate 3D stacks of processors presents another method for improving silicon density that’s completely decoupled from a “classic,” exclusively transistor-focused concept of Moore’s Law. Traditional monolithic scaling will continue at 7nm with the introduction of EUV, then at 5nm and beyond, but it’s not the sole area where Intel expects to lead with continual, generation-on-generation improvements in both density and performance. The improvements that will drive future Moore’s Law scaling at Intel aren’t driven solely by process node shrinks or lithography improvements, but by collaboration between multiple engineering teams engaged in different parts of the design process. Here, Intel’s unique status as an integrated device manufacturer (IDM) is an advantage. Because Intel manufactures its own products, there’s close collaboration between the design teams architecting future iterations of Intel processors and the fab engineers who will build those parts. We have the option to tweak an architecture to better match the capabilities of a process node, or to fine-tune a node to match capabilities we want to deliver in a given architecture. There’s no denying that we face significant challenges in our industry, but the future of Moore’s Law will be anything but a slow decline into obsolescence. Broadening the scope of how we deliver generational scaling improvements has widened the possible options for delivering them. 
I’ve never felt as optimistic about the long-term health of Moore’s Law as I do right now.
https://medium.com/performance-at-intel/a-bright-future-for-moores-law-9adc7d4bd39d
['Intel Author']
2020-12-09 19:34:42.536000+00:00
['Cpu', 'Technews', 'Technology']
1,079
Freelancers Vs Agencies: What’s Best for Outsourcing Software Development?
Freelancers Vs Agencies: What’s Best for Outsourcing Software Development? Deciding whether to outsource software development projects to freelancers or agencies is sometimes a difficult choice. Let’s find out the pros and cons of outsourcing software development to each. Vijay Khatri · Nov 15, 2021 Finally, you have decided to outsource your software development projects to a reliable outsourcing partner. However, finding the right and trustworthy outsourcing partner is like finding a needle in a haystack. Since freelancers have taken over, it has become more difficult for organizations to choose between freelancers and agencies. Deciding which one is best for you without understanding what each approach offers is like sitting in a rowboat without oars. There are certain considerations you’ll have to take care of while outsourcing your software development projects. To help you decide what’s the best approach for your organization, the team at ashutec presents the pros and cons of outsourcing software development to freelancers and agencies. Pros of Outsourcing to Freelancers 1. Freelancers are Cheaper Economic reasons play the biggest role in outsourcing software development. We all know that outsourcing is more cost-effective and affordable than in-house development. As freelancers work independently, they are even more affordable than agencies offering similar development services. 2. Skill Specialists Freelancers are individuals with specific skillsets. Oftentimes, small-scale projects require only professionals who have honed their expertise and skills in one particular area. In such situations, you can handpick the best of all freelance professionals for the task. Agencies also have experts with specific skillsets, but there’s no guarantee that every agency will have the expert professionals your project demands. 3. They are Flexible to Work With You can hire freelancers whenever you want. 
They provide you with the flexibility to hire them for whatever number of hours you need on your project. Often they don’t have requirements such as a minimum engagement level or long-term contracts. Further, you can hire multiple freelancers with different skillsets to carry out large software development projects. Cons of Outsourcing to Freelancers 1. Unreliability Freelancers work with multiple clients and on multiple projects at a time. Oftentimes, you’ll find them juggling different projects, which may delay your project’s deadline. This unreliability of project completion is why most organizations don’t consider outsourcing to freelancers. Further, overworked freelancers tend to rush project delivery, leading to poor performance. Thus, you’ll often find yourself compromising on the performance of your software project. 2. No Senior or Supervisor As freelancers work independently, there’s no senior developer or supervisor to inspect the work or quality of the project. Additionally, it’s hard to make them understand your brand identity, expectations, and project deadlines. Thus, it becomes vital for you to have some level of oversight over your freelancers. This means you’ll have to take time to communicate, review the work and quality standards, offer your feedback, and decide on project deadlines, which may increase your workload. 3. Too-Tight Work Schedules Freelance developers sometimes work on very tight schedules, so managing them often feels like climbing a mountain. It takes a lot of effort on your side, especially when you choose to work with multiple freelancers simultaneously. Many times, freelancers may not be available to work on your projects when you need them the most. Pros of Outsourcing Software Development to Agencies The major reason software companies outsource to agencies is the reliability they get in return for their project’s completion. 
Thanks to this reliability, companies prefer to work with a well-established business rather than individuals. Let’s see the other benefits of outsourcing software development to an agency: 1. Accountability and Less Oversight Outsourcing to agencies is a more trustworthy choice and requires less oversight. They would never compromise on quality or performance, fail to respond to your queries, or quit a project midway, as such things tarnish their image and reputation. Furthermore, you won’t have to invest as much in project management. Agencies have professional project managers tasked with attending to each client’s requests and reporting development progress. They are responsible for streamlining project communication and completing project milestones within deadlines. 2. They have a Good Track Record Well-established agencies have experience and a great track record of working with many clients in the past. They may have completed projects similar to your niche and have built an extensive portfolio, case studies, testimonials, and reviews. Further, agencies want to retain their clients and form long-term relationships to sustain their workload, rather than constantly finding new clients as is the case with freelancers. To form a long-term relationship, they’ll work to prove their worth over and over again. 3. Streamlined Project Management As opposed to freelancers, agencies have specialized teams and departments to take care of your project requirements and keep them on track. They are better at project management, especially for larger projects, because of their larger team size compared to an individual freelancer. Moreover, an agency’s specialized team can organize the workflow for you and offer access to enterprise resources. Thus, large projects with high complexity levels are better handled by agencies. Cons of Outsourcing to Agencies Hiring agencies has certain drawbacks as well. Let’s have a look at them: 1. 
More Expensive Than Freelancers As mentioned, freelancers are cheaper than agencies because of the overhead fees and taxes agencies charge. Salary payouts to billable professionals and non-billable staff also factor in, making outsourcing to agencies more expensive than hiring freelancers. Such factors create a vast difference between the per-project and hourly rates of agencies and freelancers. Even though agencies are more expensive than freelancers, outsourcing to agencies in countries like India is still a cost-effective option and saves a significant amount on your software development. 2. Communication Delays You are not only outsourcing to an agency but to its whole corporate administration. Agencies have multiple people involved in a project, which is often great but can lead to communication delays. Many agencies allocate a project manager or account manager to handle all communication with clients. But sometimes such a single point of contact can cause delays, as passing on client requests depends on them alone. To overcome this communication gap, many agencies allow direct access to the resources working on and delivering the project. Freelancers Vs Agencies: Who’s the Winner? While freelancers are the cheapest option for outsourcing your software development project, they are not as reliable as a well-established agency like ashutec. Freelancers are better trusted with small-scale, short-term projects, which many big agencies wouldn’t consider or be interested in. However, large, complex, and long-term projects require multiple experts such as designers, developers, QAs, writers, etc., which agencies can easily supply. They can deploy the multiple skillsets required on a project, something individual freelancers cannot match. Thus, agencies are the right choice for outsourcing your large-scale, complex software and product development projects. Ashutec Solutions Pvt. Ltd. 
is one such agency, with multiple experts in various skillsets ready to be deployed on your highly complex software project. Further, outsourcing to the ashutec team in India has many advantages over other geographies like Ukraine. We have a large portfolio and a good track record of working with small to large enterprises. Moreover, we offer unique, scalable, and maintainable software and product development services at cost-effective rates. Write to us at [email protected] to discuss your software project outsourcing needs in more detail.
https://blogs.ashutec.com/freelancers-vs-agencies-whats-best-for-outsourcing-software-development-7e4efb441ef6
['Vijay Khatri']
2021-11-15 05:18:05.895000+00:00
['Technology', 'Software Development', 'Outsourcing Services', 'Outsourcing', 'Freelancing']
1,080
Super Connect for Good 2021 Meet the Judges exclusive: Marcus Orton, Innovate UK EDGE
Super Connect for Good 2021 Meet the Judges exclusive: Marcus Orton, Innovate UK EDGE Top Business Tech · Dec 17, 2021 As we celebrate the success of Hays’ and Empact Ventures’ 2021 Super Connect for Good competition, we catch up with Regional Partner and Judge for the North of England region, Marcus Orton, Senior Innovation and Growth Specialist, Innovate UK EDGE. Hays’ and Empact Ventures’ 2021 Super Connect for Good was a resounding success. The virtual final, which took place in November, saw the regional winners go head-to-head by pitching live for the title of Super Connect for Good 2021 Overall Champion. We spoke to Regional Partner and Judge for the North of England region, Marcus Orton, Senior Innovation and Growth Specialist at Innovate UK EDGE. Innovate UK EDGE is an integral part of the grant funding organization Innovate UK, which focuses exclusively on innovation-led companies with high growth potential. Innovate UK EDGE was launched earlier in 2021 and is an initiative dedicated to helping such companies along their growth journeys to scale. Orton joined Innovate UK a year ago as an independent consultant, working with medical device sector advisors and supporting companies with regulatory compliance, risk management, product development, assessments, and services. He was also managing director of NHS spinout company SwabTech, which sought to bring blood recovery technology to market. Though SwabTech was unable to obtain the funding needed to enter the market, it brought Orton to the “very exciting” opportunity to work with the Innovate UK team operating in the North of the UK. 
Innovate UK EDGE provides a national service and is led by seasoned specialists with diverse experience and international track records at the business end of innovation. Orton’s experience in product development, largely in medical devices, notably orthopaedic instrumentation and implant system development, positioned him well as a specialist in the organization. He shares that he joined a team with a holistic set of skills, including marketing, corporate finance, accountancy, local government and public sector, and support services, drawn from across various industries such as manufacturing, retail, hospitality, and science and technology. “We’re an extended team of advisors,” he explains. “We can work either individually or as teams with clients from these sectors.” Navigating a turbulent climate Drawing on his experience at Innovate UK EDGE, Orton reflects on the current challenges facing startups today. He notes that startups must balance a focus on present operations while also remaining aware of the future as the business progresses. He shares that many contextual economic factors have impacted companies of all sizes in recent years: Britain’s decision to leave the EU, the Covid-19 pandemic, and the growing need for climate action vocalized at COP26. “These are immediate and long-term components to take into account, let alone rival and competitor influences. “At Innovate UK EDGE, we are a regular touchpoint and sounding board for what a startup’s or SME’s strategy is, and how its business plan responds to that strategy and evolves with it, both in the context of the business developing and the changing economy,” he said. As startups navigate this turbulent climate, Innovate UK EDGE provides a funded service “so that the cost to the participants in our work is primarily the time that they would invest in working with us,” explains Orton. 
Leveraging knowledge from Innovate UK, The Knowledge Transfer Network, The Catapults, and grant funding bodies such as Innovate UK and the EU Horizon project, Innovate UK EDGE provides a wealth of knowledge and connectivity to the businesses they work with, to meet the wider economic challenges head-on. In the context of the healthcare industry, Orton shares that “organizations are coming from Europe as they see the UK as a very promising economy to be working with. There is more investment going into the healthcare economy, and the demand is increasing, almost as a consequence of how successful our health system is in tackling existing challenges.” From an environmental aspect, society is taking a more active approach to mitigating climate change, which is reflected in emerging startups. “We’re seeing the increased interest in electric vehicles and alternatives to carbon-based gas, which will enable us to live more environmentally responsible lives.” For both of these trends, Orton emphasizes, “if individual companies have an idea, they need to realize the work that is needed to bring that idea to market, to evaluate its potential and to engage successfully with relevant stakeholders; be they manufacturers, suppliers, or distributors. These are the themes we see across our work.” The makings of a great startup “Creativity is at the heart of a good startup,” says Orton, “And an openness to what is driving them to create a new solution. This can mean a driving passion and perseverance that is internal to the company, but a great startup also needs to have a realistic understanding that a solution can sustainably serve a market need, and that other bodies will engage with the need that they are responding to.” Orton adds that this is an iterative process where startups need to remain flexible as they grow and learn from customers and market peers while remaining open and actively listening to stakeholders. 
Scaling up To startups looking to scale up, Orton’s reminder is that many people have been through this process before. “Speak to others who have been through the scaleup process. You don’t have to like everything they say to you, but keep in mind; it’s quite a journey. We’re very privileged to work with a variety of companies in a short amount of time, and from this, we have gained an insight into a multitude of successes and failures, and being able to gain from this is key. It’s important to remember that you’re not on your own in this process.” “I’ve been impressed by the Super Connect For Good programme. These hosts have a great knowledge of how virtual environments work, run excellent programmes and have provided a platform for some really fantastic businesses.” Orton emphasizes the importance of startups coming together to celebrate these accomplishments and the chance to meet other organizations that will support them on their journey. He adds that the quality of the companies he saw this year has given him hope that emerging technology will certainly have a role in overcoming the challenges that he previously outlined. Innovate UK EDGE supports startups by facilitating grant funding applications, business cases, and presentations. He describes the quality of the presentations at Super Connect for Good as sophisticated, but sobering. “While it’s exciting to see such a high standard of presentations, it is also sobering, as so few recipients can be awarded grant funding. This is where our support extends into other aspects of supporting business growth.” Orton explains that there is a limitation on funding resources nationally, and that if there is a successful appeal to the wider parts of the community that have more resources, be they financial or otherwise, more businesses with much-needed solutions will thrive. 
“If investors took a lower return or placed more resources into businesses, the nation may see an overall improvement in economic progress.” He continues, “We’d also like to see more government money going into social services, transport systems, and businesses, not only to improve the support for our society and communities, but as an investment to develop and strengthen the next generation of products, services and professionals.” About Marcus Orton, Senior Innovation and Growth Specialist, Innovate UK EDGE An experienced medical device developer, business leader and innovator, Marcus Orton is committed to delivering world-class products, services, business process solutions, and sustainable and effective management of resources, founded on experience gained in leading multinational corporations, UK healthcare and academic organizations, and startup and SME business settings. As an innovation specialist, he provides product, business and regulatory development support and coaching. This draws on work with medical device business teams, healthcare service providers and academic technology transfer and teaching programs. Services include gap analysis, planning, risk management, theme-based workshops, documentation review and direct support for the development of business plans, grant applications and product development projects. As an NHS spinout startup CEO, Orton led the business and product development of an innovative enhancement to surgical blood recovery systems. While it did not succeed in reaching a commercial market position, the work successfully secured grant and investor resources, technology, supply-chain, and IP development. 
This drew on experience gained in medical device product development, product launch and scale-up delivery and has supported clinical innovation in orthopaedic medical device and surgical instrumentation sectors, the delivery of private healthcare change management, and technology transfer and transformation in academic research and business innovation management roles. For more news from Top Business Tech, don’t forget to subscribe to our daily bulletin! Follow us on LinkedIn and Twitter
https://medium.com/tbtech-news/super-connect-for-good-2021-meet-the-judges-exclusive-marcus-orton-innovate-uk-edge-9be494329481
['Top Business Tech']
2021-12-17 09:15:31.233000+00:00
['It', 'Technology', 'Innovate Uk Edge', 'Startups', 'News']
1,081
The shifting of the doctor-patient relationship
The shifting of the doctor-patient relationship You have a doctor’s appointment, you are in a hurry or stuck in a traffic jam, and then you realize you can’t make it that day. So you call or email your doctor to say you can’t make it and would like a new appointment. Sure, your doctor might get a little upset, but there’s nothing you can do about it. Have you ever stopped for a second and thought about how people managed similar situations twenty years ago? We can’t deny that human relationships have gone through enormous changes in the last 20 years. For a better understanding of our main topic, let’s first consider some of the major turning points in human communication since the dawn of man. For eons, the fastest way of telecommunication was a person walking from point A to point B. After the domestication of horses, the speed of telecommunication significantly increased. The postal service — one of the oldest forms of telecommunication — has a long history; even the Roman Empire had a state-mandated postal service, called the cursus publicus. The Roman system was quite advanced: it had contract-based messengers and stations to rest and change horses. One rider could cover an average of 100 miles a day if conditions were right, yet it still took several weeks to facilitate one-way communication across the Roman Empire. There are several other examples of telecommunication from the pre-electric era (smoke and drum signals in ancient times, ships communicating with cannons and flags, etc.), but the real evolutionary step came with the industrial revolution. Electricity changed everything. It gave us the potential to communicate at the speed of light over great distances. The first device to utilize this potential was the electrical telegraph in the 19th century. Soon came the invention of the telephone, which was a step forward in the quality of communication. Text messages were replaced by real-time vocal communication. 
However, even greater inventions came with the 20th century. The radio became the first wireless communication method, and later the television allowed the transmission of images. Broadcasting was born. In the 1960s, development began on the most important invention of humanity since fire. The ARPANET was the ancestor of the communication method we know today as the internet. In the 1990s the internet became widely available, and in the 2000s it conquered our mobile devices. It changed everything, but mostly the way we communicate in our daily lives. Let’s see how it affected our medical services! The doctor-patient relationship is a special one. You trust your doctor with details of the most personal aspects of your life, and you trust that they have the knowledge to treat you well. There is confidentiality, mutual work, good news, and bad news. Communication between the doctor and the patient has a key role in maintaining a great doctor-patient relationship. The channel used to facilitate this communication was mostly personal meetings, face-to-face talk. But with the general shift in our daily communications, this relationship also started to move and became more virtual. We no longer have to visit our doctors to ask about every little thing. We can just write an email, or call them if we are in a closer working relationship and have that “privilege”. It is becoming easier for our doctors to reach out to us. Sensitive medical data is still a concern, and most of the time it’s handled in person; this especially applies to lab results. This is probably the one key element of communication that is resilient to becoming part of our virtual communication spheres. There is another aspect of this relationship that has gone through changes due to our ever-increasing technocracy. The so-called physician superiority is starting to evaporate. 
In those times when the general public had little-to-no access to medical knowledge, doctors tended to occupy an “information high ground”. This caused an uneven doctor-patient relationship, positioning the treating doctor above the patient. Today, accurate medical information is in our pockets. We just grab our smartphones and look up what we want to know. We have entered the information age. Today’s patients have a broader knowledge of their condition, reducing the need for the old-fashioned and sometimes condescending explanations of their doctors. This doesn’t mean that the need for medical professionals will decrease overnight. However, shared decision making is becoming easier with those patients who are becoming more insightful about their own condition. What will the future hold? How will this relationship evolve? Will there be an A.I. so sophisticated that it will replace our treating doctors? For more related content, subscribe to Diabetes Guru!
https://medium.com/@diabetes.commun/the-shifting-of-the-doctor-patient-relationship-b58d155e543f
['Diabetes Guru']
2020-01-09 10:38:51.742000+00:00
['Healthcare', 'Health Technology', 'Relationships', 'Technology', 'Doctors']
1,082
A Beginner’s Guide To Computer Vision
Computer Vision Before we dive into the various CV techniques, let’s explore the human body part that computer vision is trying to emulate in terms of functionality. Most humans don’t give much thought to vision; it’s a bodily function that works automatically with little to no deliberate influence. Photo by v2osk on Unsplash The human vision sensory system has developed over thousands of years to provide humans with the ability to extrapolate scenery meaning and context from the light that is reflected by objects in our 3-dimensional world into our eyes. Our eyes and brain can infer an understanding of environments from reflected light. Our visual system equips us with the ability to determine the distance of objects, predict the texture of objects without directly touching them, and identify all sorts of patterns and events within our environment. Computer Vision is the process by which we try to equip computer systems with the same capabilities that the human visual sensory system possesses. An appropriate definition for computer vision is as follows: Computer Vision is the process by which a machine or a system generates an understanding of visual information by invoking one or more algorithms acting on the information provided. The understandings are then translated into decisions, classifications, pattern observations, and many more. Our visual sensory system consists of the eyes and the brain. Although we understand how each component of the eye, such as the cornea, lens, retina, and iris, works, we don’t fully understand how the brain works. To create algorithms and systems that have the capability of extracting contextual information from images, the causes of patterns have to be observed. Then solutions can be derived from an understanding of the cause and effect of specific patterns. There are a lot of applications of Computer Vision; here are a few:
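The idea that an algorithm can extract contextual information from raw pixel intensities can be made concrete with one of the oldest low-level CV techniques: edge detection. The sketch below is a minimal NumPy implementation of Sobel gradient magnitude; the 3×3 kernel values are the standard Sobel filters, while the luminance weights and the synthetic two-tone test image are illustrative assumptions, not taken from the article.

```python
import numpy as np

def to_grayscale(rgb):
    # Weighted sum of R, G, B channels approximating human luminance
    # perception (ITU-R BT.601 coefficients).
    return rgb @ np.array([0.299, 0.587, 0.114])

def sobel_edges(gray):
    # Sobel kernels estimate horizontal (kx) and vertical (ky) intensity
    # gradients; a large gradient magnitude marks an edge.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = gray.shape
    mag = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = gray[i:i + 3, j:j + 3]
            gx = np.sum(patch * kx)
            gy = np.sum(patch * ky)
            mag[i, j] = np.hypot(gx, gy)
    return mag

# Synthetic test image: left half dark, right half bright, so the only
# "structure" in the scene is a single vertical edge down the middle.
img = np.zeros((8, 8, 3))
img[:, 4:] = 1.0
edges = sobel_edges(to_grayscale(img))
```

The output `edges` is large only near the brightness boundary and zero in the flat regions, which is exactly the kind of pattern-from-pixels inference the definition above describes. Production libraries implement the same idea with optimized convolutions; this loop-based version just keeps the mechanics visible.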
https://towardsdatascience.com/a-beginners-guide-to-computer-vision-dca81b0e94b4
['Richmond Alake']
2020-09-22 23:23:37.211000+00:00
['Deep Learning', 'Artificial Intelligence', 'Machine Learning', 'Data Science', 'Technology']
1,083
The Uncanny Valley Is Our Best Defense
The Uncanny Valley Is Our Best Defense Our bodies recognize the dangers of simulation, and we should too Photo: Coneyl Jay/Getty Images While humans are drawn to and empowered by paradox, our market-driven technologies and entertainment appear to be fixed on creating perfectly seamless simulations. We can pinpoint the year movies or video games were released based on the quality of their graphics: the year they figured out steam, the year they learned to reflect light, or the year they made fur ripple in the wind. Robot progress is similarly measured by the milestones of speech, grasping objects, gazing into our eyes, or wearing artificial flesh. Each improvement reaches toward the ultimate simulation: a movie, virtual reality experience, or robot with such high fidelity that it will be indistinguishable from real life. It’s a quest that will, thankfully, never be achieved. The better digital simulations get, the better we humans get at distinguishing between them and the real world. We are in a race against the tech companies to develop our perceptual apparatus faster than they can develop their simulations. The hardest thing for animators and roboticists to simulate is a living human being. When an artificial figure gets too close to reality — not so close as to fool us completely, yet close enough that we can’t tell quite what’s wrong — that’s when we fall into a state of unease known as the “uncanny valley.” Roboticists noticed the effect in the early 1970s, but moviemakers didn’t encounter the issue until the late 1980s, when a short film of a computer-animated human baby induced discomfort and rage in test audiences. That’s why filmmakers choose to make so many digitally animated movies about toys, robots, and cars. These objects are easier to render convincingly because they don’t trigger the same mental qualms. 
We experience vertigo in the uncanny valley because we’ve spent hundreds of thousands of years fine-tuning our nervous systems to read and respond to the subtlest cues in real faces. We perceive when someone’s eyes squint into a smile, or how their face flushes from the cheeks to the forehead, and we also — at least subconsciously — perceive the absence of these organic barometers. Simulations make us feel like we’re engaged with the nonliving, and that’s creepy. We confront this same sense of inauthenticity out in the real world, too. It’s the feeling we get when driving past fake pastoral estates in the suburbs, complete with colonial pillars and horse tie rings on the gates. Or the strange verisimilitude of Las Vegas’ skylines and Disney World’s Main Street. It’s also the feeling of trying to connect with a salesperson who sticks too close to their script. In our consumer culture, we are encouraged to assume roles that aren’t truly authentic to who we are. In a way, this culture is its own kind of simulation, one that requires us to make more and more purchases to maintain the integrity of the illusion. We’re not doing this for fun, like trying on a costume, but for keeps, as supposedly self-realized lifestyle choices. Instead of communicating to one another through our bodies, expressions, or words, we do it through our purchases, the facades on our homes, or the numbers in our bank accounts. These products and social markers amount to pre-virtual avatars, better suited to game worlds than real life. Most of all, the uncanny valley is the sense of alienation we can get from ourselves. What character have we decided to play in our lives? That experience of having been cast in the wrong role, or in the wrong play entirely, is our highly evolved BS detector trying to warn us that something isn’t right — that there’s a gap between reality and the illusion we are supporting. This is a setup, our deeper sensibilities are telling us. Don’t believe. It may be a trap. 
And although we’re not Neanderthals being falsely welcomed into the enemy camp before getting clobbered, we are nonetheless the objects of an elaborate ruse — one that evolution couldn’t anticipate. Our uneasiness with simulations — whether they’re virtual reality, shopping malls, or social roles — is not something to be ignored, repressed, or medicated, but rather felt and expressed. These situations feel unreal and uncomfortable for good reasons. The importance of distinguishing between human values and false idols is at the heart of most religions, and is the starting place for social justice. The uncanny valley is our friend.
https://medium.com/team-human/the-uncanny-valley-is-our-best-defense-9006f87d3647
['Douglas Rushkoff']
2020-12-17 16:09:14.194000+00:00
['Uncanny Valley', 'Society', 'Book Excerpt', 'Culture', 'Technology']
1,084
The Social Music Tech Revolution
So now that we’ve cleared up what social music tech is and the ways it is broken today, and outlined why exactly a “success story” can’t be found, let’s dive into what the “winner” might look like in the future: 1. Leverage My Data — One of the most discussed trends of the last five years has been that of “big data”. The most innovative products have found a way to utilize data that we already create in order to deliver actionable value back to us. Listener data is something that we create by the truckload today, yet it is largely left untouched. You may be objecting: “Hold up — companies like Spotify and Pandora are already leveraging my data really well, by creating personalized playlists and radio stations that I love”. Yes — this is true — but this is leveraging listener data for discovery rather than for social connection. Think about it — when was the last time that your listener data uniquely sparked a social interaction for you? The most successful example that we have seen is the “Spotify Wrapped” campaign, which happens at year end. We finally got a glimpse under the hood of our listener data, and people showed up on social media in record numbers to discuss it. The social music tech solution will not leverage data for social purposes just once a year… instead, it will leverage this data continuously in order to visually show where my music taste overlaps with friends’, where the opportunities for recommendations and discovery are, and where brands/labels/artists might be able to join the conversation and monetize users. That last opportunity might be most important for a start-up trying to enter the space and remain afloat. When I spoke with Conor Healy, Director at Music Tech Investment Group Raised in Space Enterprises, he mentioned that “Turntable.FM is a prime example of the challenges surrounding these companies. Despite scaling to millions of users, Turntable never found effective ways to monetize these users. 
That, along with licensing challenges/label relations, ultimately stifled growth and led to the platform’s swift demise”. 2. The Gamification of Gratification — Who doesn’t love the feeling of a successful music recommendation? We tell a friend about a song we think they’ll love, then the next day they tell us how we hit the nail on the head. This feels good as hell! It is a rush of serotonin, and to be honest, it’s downright addicting. It’s the reason that John Cusack is making mixtapes in High Fidelity, and my friends and I constantly fight over the aux cord when we’re in the same car. What if we could both bottle the social gratification from a successful song recommendation and increase the frequency with which these “lightning bolts” occur? The successful social music tech of the future will leverage data and gamification in order to tap into the serotonin release of a recommendation gone well. As a song is recommended to a friend, the life of that song will then be tracked moving forward. Was it listened to 35 times by the recipient? Forwarded to 6 friends? Added to their “Sweat It Out” playlist? All of these insights are incredibly valuable. If they are successfully delivered back to the recommender, we now have a very sticky product. Now what if this were boiled down to an “influencer score” that could be competed over and boasted about amongst friends? John Cusack might own a tech company rather than a record store. 3. Let Me Consume — As of 2019, 40,000 new songs were added to Spotify every day. That’s insane. For many of us, this is entirely too much. The content overload has left many people huddled in a corner in the fetal position, listening only to the music they’ve always known and loved. When we think about it, however, music isn’t the only medium which has experienced a boom in content. We have seen similar trends within news, video, sports, health, and countless other areas. 
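The recommendation-tracking idea described in point 2 (follow the life of a recommended song, then roll the plays, forwards, and playlist adds up into a score friends can compare) can be sketched in a few lines of Python. The event names, weights, and `influencer_score` helper below are all hypothetical choices for illustration, not any existing product's API:

```python
from dataclasses import dataclass, field

# Hypothetical weights: a playlist add signals more than a single play.
WEIGHTS = {"play": 1.0, "forward": 5.0, "playlist_add": 10.0}

@dataclass
class Recommendation:
    """Tracks the 'life' of one song recommendation after it is sent."""
    song: str
    recipient: str
    events: dict = field(
        default_factory=lambda: {"play": 0, "forward": 0, "playlist_add": 0}
    )

    def record(self, event: str, count: int = 1) -> None:
        """Log an outcome (e.g. the recipient played or forwarded the song)."""
        self.events[event] += count

    def score(self) -> float:
        """Weighted sum of everything this recommendation sparked."""
        return sum(WEIGHTS[e] * n for e, n in self.events.items())

def influencer_score(recs: list) -> float:
    """A single number a user could boast about amongst friends."""
    return sum(r.score() for r in recs)

# The example from the text: 35 plays, 6 forwards, one playlist add.
rec = Recommendation("song-a", "friend-1")
rec.record("play", 35)
rec.record("forward", 6)
rec.record("playlist_add", 1)
print(influencer_score([rec]))  # 35*1 + 6*5 + 1*10 = 75.0
```

A real system would tune these weights empirically and normalize for how popular a song already is, but even this toy version shows how each tracked event can feed one sticky, comparable number.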
The only difference between music and the rest is the fact that music has not drastically evolved in how we consume it. We need to look no further than Instagram to see how content delivery has evolved for Millennials and Gen Z. In 2018, the content format of the “story” surpassed the “feed”, growing at a rate 15x that of feeds. Lengthy, permanent feed posts have taken a back seat to quick, temporary story posts that are a maximum of 15 seconds. Consumers have short attention spans and they want to find the newest and best content with minimal effort. I am not arguing that artists need to reduce the length of their songs. I am arguing that the social music tech of the future will find a way to deliver moments of discovery within short and powerful song clips, no longer than 15 seconds, in a convenient place where users are already consuming content.
https://medium.com/swlh/the-social-music-tech-revolution-40557cd5f647
['Jack Diserens']
2020-04-20 18:38:18.864000+00:00
['Music Technology', 'Tech', 'Music', 'Social', 'Spotify']
1,085
Military Goes First
An excerpt from Convergence, How the World Will Be Painted With Data. By Sam Steinberger The military’s first practical forays into AR-like technology date back to just before World War II. In less than a century, AR technology has moved from infrared systems to “X-ray” vision. Night Vision One of the first forms of AR to be used by the military was night vision. Modern night-vision systems, which allow soldiers to see in light levels approaching complete darkness, really had their start in 1929, when Hungarian physicist Kálmán Tihanyi invented the first infrared-sensitive electronic camera. This classified technology was used by the British to detect enemy aircraft. Soon after, scientists working in Germany began to explore the use of infrared technology. By the end of WWII, they had developed an infrared weapons system that used a beam of infrared light to illuminate objects. This beam effectively “painted with light” to create visibility through special optics systems sensitive to the infrared spectrum. These “active infrared” systems were developed for tank crews, as well as for individual soldiers, so they could fight in darkness. However, the active system had a significant disadvantage: just as shining a floodlight in the visible light spectrum would give away one’s position, anyone with an infrared viewing system would be able to spot and potentially disable an infrared light source. Meanwhile, the U.S. Army also had secret programs of its own to develop an active infrared system for use by individual soldiers in the 1940s and 1950s. By the 1960s, the U.S. Army had developed the “starlight scope,” an evolution of the active system that saw use in the Vietnam War. This scope was a passive system, essentially magnifying the available light, like the moon or stars, in order to make a scene more visible. 
Although passive systems were less bulky and lighter than the active systems, the rifle-mounted scopes that used passive technology were underpowered compared to today’s devices. The fundamentals behind passive infrared systems, however, informed today’s night-vision technology. The next major development in augmented night vision came in the form of thermal-vision technology, which captured the infrared (thermal) energy emitted by objects. In 1978, microbolometers that measure radiant energy were invented, giving the U.S. military a drastically improved version of thermal imaging. Although the foundation for thermal imaging extends earlier than Tihanyi’s work, microbolometer technology made thermal imaging more portable and realistic for individual use. Today’s night-vision technology can amplify light by 50,000 times or more, and adaptive technology exists to protect soldiers’ eyes against issues like temporary blindness caused by sudden light exposure. Thermal imaging is now also used for a range of applications from satellites to rifle scopes. Smartphones can even become thermal AR devices, like Caterpillar’s CAT S61, which contains the FLIR Lepton thermal camera embedded in the phone. Given the military’s focus on interoperability, individual AR systems used by soldiers would have to incorporate, or at the very least be compatible with, modern thermal and night-vision technology. HUDs Modern Heads-Up Displays (HUDs) can be traced to WWII. As pilots struggled to find their targets while over enemy territory, they had to rely on verbal instructions from their crew. Eager to convey similar information by mechanical means, the military developed prototypes to provide pilots with flight information. However, these displays proved to be static and bright, a particular challenge for pilots flying at night. A solution was developed. 
By projecting information onto an angled piece of glass or the window of the cockpit, the HUD was able to provide a pilot with radar or targeting details, without the pilot having to divert from looking up and out of the aircraft. The first HUD in operational service was built by Cintel and eventually acquired by what is today known as BAE Systems. It was used on the maritime strike aircraft, the Blackburn Buccaneer, which entered service trials with the British Royal Navy in 1961. Designed for high-speed, low-level operation over land and sea to carry out split-second attacks, the Buccaneer’s pilot needed aircraft attitude and weapon aiming information in a display that wouldn’t divert the pilot’s gaze. The modern HUD was born. Later in the 1960s and into the 1970s, it was incorporated into American military aircraft. Iterations of the technology would see further use in commercial aircraft, spacecraft, cars, and even a $400,000 helmet, designed to let Lockheed Martin F-35 Lightning II pilots see through their own aircraft with “X-ray” vision. HoloLens For the past three decades, the U.S. military has been on a mission to develop a personalized augmentation system that would assist its soldiers on the battlefield. Back in 1989, the Army demonstrated its Soldier Integrated Protective Ensemble (SIPE), a technology demonstration that brought wearable sensors and enhanced communications systems to individual soldiers. Although it proved feasibility, the program wasn’t particularly soldier-friendly. Grunts had to haul around a gargantuan thermal sensor device and a clunky head-mounted display (HMD), not to mention carrying around a backpack battery to power it all. Nevertheless, the Army decided the concept was sound, and SIPE gave way to the Land Warrior program. The combination of small arms with high-tech equipment at Land Warrior would lead the military into the 21st century. 
The program incorporated electronic systems, like cameras, thermal sights, and laser rangefinders, onto small arms like the M4. The helmet system provided mounts for optics that allowed a soldier to visualize information provided by equipment such as the thermal sight. Soldiers had communications devices and other electronics integrated with their backpacks, along with what was essentially a mousepad on the soldier’s chest. By the time the program was canceled in 2007, systems had decreased from 86 pounds in 1999 to 40 pounds — on top of the approximately 80 pounds of full combat gear already carried by a soldier. Each system cost more than $85,000. Despite the advances in wearable sensors and the miniaturization that evolved over the decade, Land Warrior at its best was still not the Augmented Reality of science fiction or the Augmented Reality that the Army felt was adequate. Soldiers’ systems incorporated AR elements, like thermal- and night-vision technology, but the military wanted a system that would perform functions like identifying friend or foe on the battlefield, seamlessly transitioning between electro-optical technologies, and mapping the soldier to the environment he or she was in. It wanted an Urban Warfare Augmented Reality (UWAR) system, especially as top staff in the Army increasingly feel that the future of war will be fought in dense, complex urban environments. The Battlefield Augmented Reality System (BARS), initially funded by the U.S. Office of Naval Research, was driven by the goal of creating an infantry system analogous to a pilot’s “Super Cockpit,” according to the U.S. Naval Research Laboratory. Features of BARS included a database that could be updated by all users and commercially-available hardware components. 
Significant research went into understanding the way AR systems handled things like depth perception, visual occlusion and the visibility of AR displays, information filtering, object selection, collaborating across a shared database, and the requirements of embedded training. In 2014, the U.S. Army announced the introduction of the ARC4, by Applied Research Associates, Inc. The system attached to a soldier’s helmet and allowed users to map themselves to a battlefield environment. Commanders could provide their soldiers with maps and navigation information, beamed to the soldier’s field of vision. Today, the U.S. Army Communications-Electronics Research, Development, and Engineering Center (CERDEC) is building on the advances made by BARS to develop the Tactical Augmented Reality (TAR) system, an even more futuristic system that will allow soldiers to map themselves to an environment, quickly and easily identify friendlies and targets, and provide soldiers easily accessible, real-time battlespace information. While TAR is still in development, other countries have expressed interest in AR on the battlefield. In 2016, the Israeli army reportedly purchased HoloLens devices, exploring their potential to improve battlefield strategy and training opportunities. That same year, there were also reports of a Ukrainian company named Limpid Armor working with the military in its country to implement the Circular Review System for tanks. A series of cameras attached to a tank would give a commander wearing the HoloLens a 360-degree view around the vehicle. It’s a concept that is already in place with the helmets of pilots flying Lockheed Martin F-35 Lightning II fighters. Following the launch of Magic Leap’s AR system, the U.S. Army announced an opportunity for AR companies to provide the military with AR systems. And it wasn’t just a device here and an app there. 
Magic Leap and Microsoft were in the running for a program that could see the military purchase 100,000 AR devices. By the end of 2018, the U.S. Army announced Microsoft would be awarded a $480 million contract and an opportunity to put the company’s new HoloLens to the test. Both parties will likely walk away with lessons learned. Microsoft will need to make its AR system ready for battle, which will include hardware and software upgrades. Some pundits suggest the HoloLens will be broken into its components and rethought for this rugged application. Soldiers reliant on the technology can’t be slowed down to restart their AR systems or make repairs if the system is exposed to the elements. If all goes well, the contract stipulates that Microsoft will provide the U.S. Army with as many as 100,000 AR devices that will “increase lethality by enhancing the ability to detect, decide and engage before the enemy,” according to the government’s program description. The U.S. military will face not only computing and hardware challenges, but cybersecurity problems as well. The military’s communications systems will need to operate despite jamming attacks or worse. Enemies that could hack a networked system replicating soldiers’ eyes and ears could decimate an attacking or defending force. As it has for the last few decades, the military will continue to go first in Augmented Reality, pushing the boundaries of what’s possible and opening up new ways of using today’s technology. Challenges to Overcome There are plenty of boundaries to push; chief among them are stability and accuracy. “[C]urrent development is far behind the need of urban warfare,” noted researchers in a recently published study looking at the capabilities of AR compared to the demands of UWAR. 
“The correctness of combat action and the speed of execution, will not only impact the success of a military confrontation but also result in a significant difference in combatants’ survival,” continued the peer-reviewed article, “Survey on Urban Warfare Augmented Reality.” The military’s version of the HoloLens or another AR system will be put to the test. Rapid head movements, poor operational conditions, and vibrations inherent in urban combat make registration and stability very challenging. The gyroscopes, GPS antennas, magnetometers and inclinometers, delicate instruments in some cases, will have to withstand extreme temperatures, moisture levels, and impacts. Urban environments have a variety of variables, including distribution of targets and signal-blocking buildings that will have to be overcome through powerful computing and robust networking. The system cannot fail in its registration of all users’ locations, keeping that information continuously and simultaneously up-to-date. And, of course, the system will have to be portable and come with enough battery life to keep it functioning for as long as possible. The HoloLens has good computing horsepower and does a good job of mapping itself to its environment in a non-combat situation, said Dr. Ronald Punako, Jr., who has studied AR and VR technology. The software looks at movement, although he noted a combat situation could present problems for a commercial-grade AR system like an out-of-the-box HoloLens system. The advantages of a well-built, functioning AR system are just as real as the obstacles, however. AR systems outperform more traditional navigation systems, according to researchers. The intuitive nature of AR navigation reduced mistakes made by combat-stressed soldiers and situational information reinforced the situational awareness of fighters. Commanders also improved their operational task planning with the assistance of AR. Keep Up The Good Work Then there’s training. 
VR is further along in its implementation for training, but a robust AR system could also prepare soldiers and address poor combat practices. And if an AR system is developed for combat, it makes sense to use it for training as well. For now, VR is better suited for training because the hardware has already been established, said Tyler Gates, managing principal at Brightline Interactive, a team of creative technologists whose clients include future-looking government agencies like DARPA. VR has had a role in training U.S. security forces for over five years. An untethered, free roaming training system, developed by Raytheon Company and Motion Reality Inc., was demonstrated at joint exercises in 2012. The VIRTSIM training supported three squads of soldiers, armed with functional replica weapons. VIRTSIM is now used by soldiers in Malaysia and the United Arab Emirates, in addition to the U.S. The Automated Serious Game Scenario Generator for Mixed Reality Training (AUGGMED) program in Europe is another example. In early 2018, VR security training in the multi-country program, led by business group BMT, incorporated physical objects and locations into its exercises. AUGGMED was used by port security forces in Greece, who were training for potential terrorism-related threats. The system incorporated a hybrid experience by integrating on-site trainees alongside other trainees working remotely via VR. AUGGMED has three degrees of training: a limited VR with no mobility or tactile feedback, an immersive VR experience with limited mobility and tactile feedback, and an on-site fully-immersive and mobile experience with locally networked colleagues, allowing for a range of training experiences. The system transforms a VR headset into a type of full AR/VR device. HTC Vive headsets go one step beyond VR by attaching a pass-through, outward-facing USB 3 camera on the front of the headset, providing a forward-looking perspective. 
The workaround is needed because devices like the HoloLens have a poor field of view and the image quality isn’t the best for trainees, according to Dr. Robert Guest, part of the AUGGMED team. He’s been a leading simulation developer at the University of Birmingham for the past decade. AR, Meet Fashion One of the more interesting tangential integrations with AR may be fabric. Nanotechnology allows fabrics to be impregnated in such a way that the material reflects certain electromagnetic signatures. This radar- and microwave-absorbing fabric could be integrated with an AR system, allowing soldiers to identify others just by the radar signature of the uniform they’re wearing, helping soldiers avoid friendly-fire incidents and more effectively target enemy forces. The system could also be used by emergency personnel, where medics might be able to identify and triage injuries using a combination of functional fabric, smart sensors and an AR system, points out Eduardo Siman, a former technology consultant and early investor in Virtualitics. First responders to the scene of an earthquake, for example, might be able to better pinpoint the location of victims. Meanwhile, a project at MIT conceived the idea of using smart belts to passively measure radiation levels, a difference-maker for nuclear technicians. Sea and Air Land-based soldiers and commanders aren’t the only military personnel expected to benefit from AR. The U.S. Navy has already tested the Unified Gunnery System Augmented Reality, or GUNNAR, with plans to develop an AR helmet to facilitate better communications between crews aboard America’s fleet. The software helps by issuing “fire” and “cease fire” commands, and provides virtual training. The training alone is a huge cost saving and readiness boost because live rounds aren’t fired, but shipmates are still able to prepare for battle. The software runs on a helmet made by AR hardware developer Daqri. 
A future combat system dreamed up by BAE Systems and the Royal Navy aims to give a lightweight HoloLens system to bridge watch officers on warships. Bridge watch officers are responsible for monitoring a number of instruments and readouts to safely guide the passage of the ship. The new system would put that information onto a digital surface visible to the officer. Artificial intelligence would assist the officer in interpreting various readouts, alerting military personnel to potential hazards and creating a more efficient workflow for sailors on the ship. The BAE Systems technology is advancing rapidly. The rollout, part of a $27 million advanced combat systems upgrade, is slated for testing in 2019 and could be fully integrated into the Royal Navy’s fleet by the end of the year. The system for the Royal Navy is akin to a lightweight version of one of BAE Systems’ most advanced AR helmet-mounted HUD systems: the Striker II. Intended for pilots, the full-color system has integrated night-vision and picture-in-picture display capabilities, according to its developer. Similar to what some VR headsets have, the Striker II also has spatial audio technology, or “3D audio.” Spatial audio technology allows the pilot to auditorily pinpoint threats by delivering warning signals to precise areas of the pilot’s earpiece. Despite all the computing going on to deliver sensory input to the pilot, the system has “near zero” latency, according to BAE Systems. The Royal Australian Air Force expects AR to be an important pillar of its military technology platform. As early as 2016, Saab Australia and the country’s military were looking at ways the HoloLens could be useful for strategy, threat management, and training. The country’s army has also looked into the ARC4 system. Then there’s the AR helmet-mounted HUD system, the Gen III Helmet Mounted Display System, developed for the Lockheed Martin F-35 Lightning II. 
Developed jointly by Rockwell Collins, Elbit Systems and Lockheed Martin, the helmets, at $400,000 a pop, are custom-made to the head shape of each pilot and include real-time visuals provided by mounted cameras on the aircraft, as well as night vision and thermal imagery. A magnetic field generated by a transmitter in the pilot’s seat allows the system to track the pilot’s head movement. The Military Leads The technological innovation improving AR systems in the military shows no signs of slowing down. Complex, functional AR systems might quickly find their way into land-based combat. They’re already seeing limited use in the form of night-vision and thermal-imaging technology on the ground. In the sky, this technology is even more important for pilots. On the sea, they’re slated to be in use on the bridges of the most advanced navies in the world. Fabrics and other peripheral materials, embedded with sensors or nanotechnology, could be integrated with an AR system, creating a world that’s more deeply permeated with data and new insights. For now, the military leads the way.
https://charliefink.medium.com/military-goes-first-de6f760a0b53
['Charlie Fink']
2019-03-14 16:26:00.715000+00:00
['Technology', 'Virtual Reality', 'Charlie Fink', 'Augmented Reality', 'Military']
1,086
Hiring! Operations Team Lead
This is a posting for an experienced leader who believes successful patients can and SHOULD play a much bigger role in our health system. If you are a hard worker, strategic thinker, self-starter and interested in leading multidisciplinary teams to help make this vision a reality, let’s talk :) — — — — — — — — — — — — — — — — — InquisitHealth is a healthcare technology company based in New Jersey. Our clients are health plans, national patient foundations, hospital systems and local governments. We help everyone utilize evidence-based peer-to-peer mentoring to improve health outcomes. Peer mentors are patients who manage their own chronic disease well. We train our peer mentors to coach other patients-like-them to better health. A little bit about our culture: We work hard. We have a lot on our plate. And we are always looking to add more. We thus greatly value prioritization and being highly efficient. We value developing and maximizing the talents of each team member. We value taking initiative, asking the right questions, and taking ownership. We value lucid thinking to distill complexity. We are ever-evolving with new approaches and new experiments. Creativity, grounded in strategic thinking, is key. We prioritize each individual patient, mentor, and client, while thinking about scale. We work in an open office. We are casual. RESPONSIBILITIES Lead a multidisciplinary team to assess tactical and strategic challenges, identify key gaps/constraints, and define & prioritize work efforts in alignment with the company’s OKRs and vision. Continuously analyze our programs to guide innovative projects and improvements in quality and operational efficiency Help build internal and external resources to better support clients, patients and mentors. Contribute data-driven insights to support and lead strategy discussions Full time role at our office in northern New Jersey (River Edge). 
WHO YOU ARE Quick preface: We move fast, iterate quickly, experiment often, and do what we can to help our clients, patients and mentors. Our folks thus need to be comfortable with ambiguity, creative self-starters, nimble, quick learners, and great at problem-solving. Prior experience with health care operations or project management; management consulting background preferred Comfortable working with data to distill insights Ability to quickly adjust priorities and balance needs Articulate over the phone and able to draft clear prose Strong sense of ownership & personal accountability Be a champion of OKRs across the company BENEFITS & PERKS Title + compensation (including stock options) based on experience Gym membership to stay healthy and sharp Health, dental and eye insurance Health Care FSA 401K TELL US ABOUT YOU! Where do you live? Ideal start date? Why is this a great opportunity for you? How does what we do resonate with your background? Please send resume + cover letter to [email protected]!
https://medium.com/@inquisithealth/hiring-operations-team-lead-7e68d6d1337d
[]
2019-03-22 20:57:32.584000+00:00
['Startup', 'Operations', 'Technology', 'Peer To Peer', 'Healthcare']
1,087
Water on the Moon Isn’t Just Hiding in the Shadows
Water on the Moon has been seen before in the shadows of craters. Now, water has been detected on the sunlit face of the Moon as well. This illustration highlights the Moon’s Clavius Crater, depicting water trapped in the lunar soil there, along with an image of NASA’s Stratospheric Observatory for Infrared Astronomy (SOFIA) that found sunlit lunar water. Image credit: NASA Water is essential to the exploration of the Solar System, and supplying it presents one of the greatest challenges to the colonization of space. The discovery of water ice hidden in the dark recesses of deep craters opens new resources for astronauts as they reach out beyond the Earth. However, accessing that water, deep in treacherous craters, would be challenging. Water was just found in Clavius Crater, one of the largest craters in the Moon’s southern hemisphere, which is visible from Earth. This suggests water deposits may be found throughout the lunar surface. Researchers previously saw evidence of water on the sunlit surface of the Moon, but that evidence was ambiguous. Hydroxyl, a molecular fragment consisting of one hydrogen atom bound to one oxygen atom, could have created the signal seen by researchers. Rising Above the Din SOFIA lifting off on a mission to explore the Cosmos. Image credit: Screenshot from NASA video. The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a modified Boeing 747SP aircraft, allowing researchers a chance to lift an infrared telescope above most of the interfering water-filled atmosphere of Earth. Using the Faint Object infraRed CAmera for the SOFIA Telescope (FORCAST) aboard SOFIA, researchers examined the Moon in infrared light, centered at a wavelength of 6.1 microns, confirming the presence of water. This wavelength is emitted by molecules of water, but not by hydroxyl, allowing researchers to confirm they were looking at full water molecules. 
“We observed a region at high southern latitudes near Clavius crater and a low-latitude portion of Mare Serenitatis… Data from SOFIA reveal a strong 6 [micron] emission band at Clavius crater and the surrounding terrain,” researchers wrote in Nature Astronomy. The SOFIA discovery explained in one minute. Video credit: NASA/Ames Research Center The atmosphere of Earth is also fairly quiet at this wavelength, aiding observations with SOFIA. “Prior to the SOFIA observations, we knew there was some kind of hydration. But we didn’t know how much, if any, was actually water molecules — like we drink every day — or something more like drain cleaner,” said Casey Honniball, from the University of Hawaii at Mānoa in Honolulu. Water You Doing There? A look at water on the Moon, SOFIA, and the future of space exploration. Video credit: The Cosmic Companion How the water got on the Moon remains a mystery. One possibility is that it arrived via micrometeorites, which deposited water on impact. Or, it might form as hydrogen arriving on the solar wind reacts with oxygen in the lunar surface, forming hydroxyl molecules, which are transformed into water through the impact of micrometeorites. The water may now be caught up in glass beads, or could be hidden between grains of soil in the form of ice. All in all, the lunar surface appears to hold, roughly, 12 ounces of water in each cubic meter of soil. NASA is currently planning to send the first woman and the next man to the Moon in 2024. As human exploration of the Solar System expands, it will be vital to provide colonists with water. Additional studies of the Moon will be conducted using SOFIA and other instruments, searching for answers to how this water forms, moves, and stays stable in the harsh, sunlit conditions on the lunar surface.
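For a sense of how little water that actually is, the figure converts to a concentration with one line of arithmetic. The regolith density below is my own assumption for illustration, not a figure from the researchers:

```python
# Back-of-envelope: how concentrated is "12 ounces of water per cubic meter"?
# Assumptions (mine, not the article's): 1 US fl oz of water is ~29.57 g,
# and lunar regolith has a bulk density of ~1.8 g/cm^3.
OZ_TO_GRAMS = 29.57           # grams of water per US fluid ounce (approx.)
REGOLITH_G_PER_M3 = 1.8e6     # assumed soil density: 1.8 g/cm^3 = 1.8e6 g/m^3

water_g = 12 * OZ_TO_GRAMS                 # ~355 g of water per m^3 of soil
ppm = water_g / REGOLITH_G_PER_M3 * 1e6    # parts per million by mass

print(f"~{water_g:.0f} g of water per cubic meter, roughly {ppm:.0f} ppm")
```

A few hundred parts per million by mass is extremely dry by terrestrial standards, which underscores the challenge of actually extracting it.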
https://medium.com/the-cosmic-companion/water-on-the-moon-isnt-just-hiding-in-the-shadows-4bda282eeb8f
['James Maynard']
2020-10-26 21:39:39.770000+00:00
['Technology', 'Future', 'Space', 'Science', 'NASA']
1,088
Crypto-Keynesian Lunacy
My article on smart contracts has had quite a reaction, much like the blockchain article before it. I’ve had a lot of praise and criticism on the article, much of it coming from the Ethereum/altcoin crowd, which didn’t like what I said. Chief among the critics vying for my attention is one Kyle Samani, who more or less threw the kitchen sink at my arguments in the form of a 19 message-long tweet storm about my article. I don’t follow him on Twitter and wouldn’t have noticed, were it not for him specifically pinging me on Telegram to take a look. Now, I don’t normally spend my time arguing with people that are seeking to waste my time, but in this case, I’m going to make an exception because he so clearly embodies what I call Crypto-Keynesian lunacy. In this article, I’m going to make the case that the economic world view of people like Kyle is fundamentally what drives his criticism and not the technical or social reality. Crypto-Keynesian Fallacy #1: Aggregate Numbers Tell an Accurate Story The first argument for smart contracts is that there’s a lot more “developer activity” around Ethereum’s Turing-complete Solidity. This argument is about as useful as focusing on the aggregate demand and aggregate supply that Keynesians love so much. Economic activity by itself doesn’t tell you anything. People could be trading the same egg back and forth for a dollar a billion times and that would count in the economic activity statistics as a billion dollars of activity. The aggregate numbers simply don’t mean very much because they have at best a very weak correlation with the actual value added. Belief that developer activity actually means a lot is mistaking action for value. It’s true that creating value requires activity, but not all activity adds value. In fact, a lot of activity reduces value. Actually figuring out which activities are valuable and which add nothing or even have a negative effect is not easy and requires a lot of research and technical due diligence.
This is why Keynesians focus on easily measured numbers, not on actual reality. Looking at easily measured numbers is much easier than doing the hard work of actually evaluating complicated situations. Crypto-Keynesians do the same thing. If you look at protocol development, Bitcoin has many more devs. Most of the “developer activity” on Ethereum is in ICO development and not protocol development. ICOs have been a cash cow for ICO issuers for over a year now. But raising money does not mean you’re adding value any more than digging and filling a ditch adds value. Most of these projects have zero users, or even a product. Many haven’t released anything for years, nor do they need to since they have no obligations to the token holders. In other words, developer activity is a really poor proxy for usefulness or value-add, and ultimately, activity does not mean success. Google Glass, for example, had a lot of developer activity, but that didn’t mean it was actually useful, viable or successful. Crypto-Keynesian Fallacy #2: Centrally Designed Systems Work Better than Organic Bottom-Up Reality-Tested Systems Keynesians are famous for their faith in central planning, especially as a way to increase demand. Entire industries like health care, education and housing have felt the heavy hand of government intervention and desire to do “what’s best for us” despite our protests. Crypto-Keynesians are the same way. Their faith in centrally planning products perfectly from the outset is astonishing and largely driven by hopium. What’s more, a lot of these centrally designed systems are much less efficient and require a lot more overhead. Provably fair gambling can be done without the large overhead of an expensive blockchain. You can prove something existed at a certain point in time without the cost of millions of transactions that get replicated thousands of times.
The hubris of central planners is that they not only think everything will work, but that it’s the best solution from the outset. But that’s assuming something works. Most of the projects in the crypto space have added zero value. In fact, many have swindled, scammed and stolen so as to subtract value. The fact that so many projects have failed to deliver value should be humbling. Instead, for a Crypto-Keynesian, this is only evidence that the big use-cases are just around the corner and that new and interesting breakthroughs are only a few months away. They believe that raising money/hiring people is evidence that actual value is about to be delivered soonish™. The bottom-up reality is that most people simply don’t care to use these systems unless heavily incentivized to do so. Usually, that means that the users are actually investors, which is a nice way of saying that the users are really in a Multi-Level Marketing scheme searching for new investors that will use the systems in the same way. Systems are really, really hard to design and are very rarely right the first time. As any startup veteran will tell you, it’s really hard to find product-market fit, and many startups not only try different models, but often pivot to entirely different industries for that reason. Successful startups try lots of things to see which things work and iterate. The best startups find solutions to existing problems. By contrast, most of these crypto startups are trying to find problems for the crap that they want to build. What’s really scary about blockchain systems is that you really can’t iterate or change things very much without throwing out all pretense of decentralization. Thus, Crypto-Keynesians have to believe any investments they make will be correct and successful on the first try. This is simply not the reality of complex systems or markets. At best, the iterations in these projects are very slow and enormously expensive, and the systems themselves are really hard to change.
Crypto-Keynesian Fallacy #3: Everyone Will Do Things Exactly As Expected The most hilarious part of Kyle’s tweet storm was saying how the NBA.com server would be the source of truth and therefore, Oracles could be trusted and smart contracts can be useful for non-digital stuff. Remember, the Oracle is the last word on the smart contract. There is no further adjudication in a decentralized setting. You can’t appeal to another judge should something go wrong. You simply have to accept whatever the Oracle says. Apparently, this enormous attack surface an Oracle adds is not really even considered. Who controls the data feed from nba.com? Can that get hacked? Would anyone in a gambling situation be incentivized to affect this feed? Say you got 100–1 odds on the Cavaliers when they were down 3–0 to the Warriors and bet $1M on the Cavs. Would you have incentive to hack the servers so you could be paid out in a bearer instrument? Would you be incentivized to bribe the nba.com IT guy? Would you kidnap the person’s family? After all, you would only need to change the Oracle’s vote. Given that the ambitions of these projects are to take over the entire sports betting market ($3B/yr), this is not just theoretical. An adversarial network requires a lot of thought about the actual incentives. In addition, the more complicated the system is, the more you have to test out what holes in the incentive structure there might be. Most of these projects are ridiculously complicated. Bitcoin, by comparison, is orders of magnitude simpler and less vulnerable in terms of incentive structure. To think that you’ve designed an incentive structure that’s correct on the first try is absolute lunacy and reflects a lack of real world experience. The usual response by a Crypto-Keynesian when faced with such questions is to refer you to some added incentives in the system like additional Oracles, bonded Oracles, or some such. 
It’s usually a half-baked idea that anyone can poke holes in within 5 minutes, but that’s the normal response. There’s probably a solution to every possible attack vector, but most of the time, each “solution” creates an even bigger attack surface. Eventually the solutions get so complicated and so hard to analyze that the flaws can only be revealed by the cold, hard slap of reality. Conclusion The real weakness of Keynesianism is the belief that you can get something for nothing. It’s the broken window fallacy in a macro form. Crypto-Keynesianism makes essentially the same error. Developer activity by itself means very little. Centrally designed systems generally don’t work very well on the ground. Incentives are really hard to get right. This is why we have so many ICOs. They are “doing something”, but not adding value. You may be raising money, you may be hiring people, you may be donating to some causes, but you’re not adding value until the market buys your product or service. And no, you don’t get to brag about how “successful” you are until you’ve actually added value and not anytime before then. The real innovation was and always has been Bitcoin. Bitcoin is sound money, and sound money is what allows you to preserve value over the long term. That, in turn, allows people to save for large capital projects which, in turn, build up civilization. Crypto-Austrians believe this is the basis for building the future, not these pie-in-the-sky projects that don’t deliver value. Crypto-Keynesians believe success is simply having money. But money and value are two separate things. You can make money by adding to civilization or you can make money by rent-seeking. The former should be applauded. The latter should be despised. The ICO boom makes clear that Crypto-Keynesians can’t tell the difference. This leads to building products that add “activity” but fail to deliver any sort of value, in effect digging and filling ditches and pretending that means something.
This is Crypto-Keynesian lunacy.
https://jimmysong.medium.com/crypto-keynesian-lunacy-16bb9193a58
['Jimmy Song']
2018-06-18 14:49:07.944000+00:00
['Technology', 'Economics', 'Bitcoin', 'Blockchain', 'Smart Contracts']
1,089
The LMS: The Moral of the Story
From researching this market for over 6 years, I’ve decided to put my thoughts on paper… I mean, today’s version of paper being an online blog. If you are interested in the future of education technology and how it will impact students, faculty and administration, then you must be interested in where the Learning Management System market is going. I’ve decided to break it down by phases, from the origins to the future. From paper to online: the early players Early players in the LMS market created platforms that essentially brought courses from being completely offline with paper and pencil, to online. Resources were made available on a faculty site, grades were put online and announcements were possible. What the early players did best was make a file system that was secure for the faculty to upload resources. The cons… These systems evolved in the way that other similar technologies have evolved. They became loaded with features upon features upon features. And then that file system became loaded with folders, and folders. And then some more folders… Faculty and student reactions to using this poor and ugly system The system got so out of touch, slow, and tedious that even the simplest task of uploading a file or assignment became ridiculously time consuming. Wasn’t this system supposed to make it easier to access files instead of printing things out for classes? It was no longer simple, and some faculty decided enough was enough; they moved to other platforms or got rid of the system entirely. This is where the market started to shift and a few new players arrived. The incremental system As phase II of the LMS market, Canvas by Instructure has done good things but mainly has been incrementally better than the previous legacy systems. Canvas by Instructure is a cloud-based learning management system with additional grading and analytics features.
The platform is, in my opinion, a slightly better face on what Blackboard, D2L, Moodle, or any of the other legacy players have created. What Canvas did best was bring the LMS to the Cloud. They tout a better gradebook, but in actuality, it’s pretty much the same as any other legacy LMS. I call them the Blackboard 1.5. What really differentiates Canvas from the other players is their marketing. They are good at selling and making things sound much better than they actually are, in my opinion. And with that marketing, they were able to do something no one thought was possible — take over Blackboard market share. From 2008, when they were founded, to 2010, they had zero contracts. However, from their first contract signing in 2010 to 2015, they went from zero to over half a billion in value. Goes to show that you don’t have to do much to overtake the competitors in the market. If you have a slightly better product and good marketing, you can win. The next generation is social It’s clear from the technology we use, like Facebook, Twitter and LinkedIn, that social media has taken over the modern interface. Messaging apps and social media are norms in the real world, but in education, it seems that we are still around 10 or 15 years behind the game. What I expect is a move to social media, mobile and apps within the LMS. It’s normal for students and faculty to utilize these applications on a daily basis outside of education. Why not have them in our educational lives? This is quite similar to a few revolutions in technology over the past decade. Here are some examples that show how markets have previously been disrupted and how it relates to education. iPhone vs Blackberry Blackberry was the market leader of the phone market at one time. What Blackberry did best was provide an all-in-one solution with a great keypad and messaging app. Problem was that they built everything in-house. Meaning, it was very difficult to integrate applications from outside Blackberry itself.
And they had higher costs than newcomers like Apple, since they were the ones developing all the apps rather than the community doing it for them. The phone of apps Apple revolutionized the market not with the best core applications but with the combination of the core applications it had (i.e. Phone, Messaging, Safari, Music, etc.) + Touch Design + App Store. The App Store is really what took the iPhone to the next level and conquered the phone market. The App Store allowed any individual to customize their phone with the applications they wanted. It allowed a community to develop and make the iPhone that much better, every time. And guess what, Apple didn’t have to pay for this to occur. They could literally sit and watch as their ecosystem grew like wildfire. Before you knew it, they had many more apps than Blackberry could ever have. Apple created a new category. An app phone. Airbnb vs Hotels You’d think, with an industry that has been around for a very long time, that it would be impossible to break into it. I mean, that’s what the investors say… Your home, anywhere But with Airbnb, what they did best was prove the naysayers wrong. Hotels are often expensive and do not showcase the local vibe that many want when they are traveling. Airbnb provided a low-cost and fun experience to gain the local culture of a community. For the first time in this market, a company quickly took over the traditionally stagnant incumbents. And by doing so, they created a new category. The sharable home. Uber vs Taxis Hailing a cab will soon be a thing of the past Anyone 5 years ago would have said that taxis were just annoying but we just ‘had to deal with them’. The guys at Uber thought differently and said why not shake this up and make it mobile. Basically, what Uber did was make the taxi go mobile, making it more comfortable and convenient for drivers and passengers to get together. They created a new category. The mobile taxi.
Just because a market has been around for a while does not make it impossible to break into. I firmly believe that the LMS market will be disrupted as well with a new category. The social learning platform. Where’s the community in online education today? Some things yet to be done in this LMS market: Connecting the campus community to the LMS Opening up the community to create applications on a single platform Having a mobile-ready platform (*not just an app as a checkbox for an RFP) And, a framework based on a social network (i.e. community) These are the fundamental differences in the LMS of the future. Imagine a platform that can make that happen. It’s just a matter of when this happens, not if.
https://medium.com/notebowl/the-lms-the-moral-of-the-story-a11d8f601d63
['Andrew Chaifetz']
2017-08-23 02:07:06.219000+00:00
['Startup', 'Edtech', 'Education Technology']
1,090
Lightning Nomads : The future of Bitcoin Payments!
These amazing friends are connecting hodlers and merchants across the world, making the dream of cryptocurrency a reality via the Lightning network. Founders of Lightning Nomads: Naser Dashti & Jake Senn Lightning Nomads’ mission is to grow a community of Bitcoin merchants that accept payment via the lightning network and consumers who are willing to make purchases with Bitcoin, to make Bitcoin a truly global currency. There is a closely held historical relationship between people participating in the cryptocurrency evolution and digital nomads. The crypto community is naturally decentralized and distributed across the globe because nationality doesn’t predict who will be attracted to new technology and ways of thinking. So, as people begin looking for a community of like-minded individuals, they have to push past borders, getting to know people around the world, attending conferences together and sharing knowledge. The lightning Bitcoin map across the world⚡ Merchant Maps Show You Where to Spend Your Bitcoin: which in turn serves as a network map for Lightning Nomads to expand their gateways Interactive online maps are the most intuitive way to find anything these days, including merchants that are willing to take your crypto for whatever they sell. And while Google Maps can help you locate a few businesses dealing with cryptocurrencies near you, other platforms are far more specialized. Coinmap.org is one of them, as its website allows crypto companies to share their coordinates for free. The map displays around 16,000 venues around the world that accept cryptocurrency payments. Users can filter these entries by multiple categories such as shopping, café, food, grocery, lodging, transport, sports, and nightlife. It will also show you ATMs where you can withdraw digital coins. Bitcoin hotspots across the planet The Lightning Network is a “layer 2” payment protocol designed to be layered on top of a blockchain-based cryptocurrency such as bitcoin or litecoin.
It is intended to enable fast transactions among participating nodes and has been proposed as a solution to the bitcoin scalability problem. Whether you’re most comfortable contributing time to help expand our network of ⚡ merchants or Sats to help us grow, we need you on our team. All spending will be accounted for in supporting our mission. The QR code is a lightning wallet address. Make sure to send donations from a lightning wallet. Lightning Network: Scalable, Instant Bitcoin/Blockchain Transactions for the Future Instant Payments. Lightning-fast blockchain payments without worrying about block confirmation times. Security is enforced by blockchain smart-contracts without creating an on-blockchain transaction for individual payments. Payment speed measured in milliseconds to seconds. Scalability. Capable of millions to billions of transactions per second across the network. Capacity blows away legacy payment rails by many orders of magnitude. Attaching payment per action/click is now possible without custodians. Low Cost. By transacting and settling off-blockchain, the Lightning Network allows for exceptionally low fees, which allows for emerging use cases such as instant micropayments. Cross Blockchains. Cross-chain atomic swaps can occur off-chain instantly with heterogeneous blockchain consensus rules. So long as the chains can support the same cryptographic hash function, it is possible to make transactions across blockchains without trust in 3rd-party custodians. How it Works The Lightning Network is dependent upon the underlying technology of the blockchain. By using real Bitcoin/blockchain transactions and its native smart-contract scripting language, it is possible to create a secure network of participants which are able to transact at high volume and high speed. Bidirectional Payment Channels. Two participants create a ledger entry on the blockchain which requires both participants to sign off on any spending of funds.
Both parties create transactions which refund the ledger entry to their individual allocation, but do not broadcast them to the blockchain. They can update their individual allocations for the ledger entry by creating many transactions spending from the current ledger entry output. Only the most recent version is valid, which is enforced by blockchain-parsable smart-contract scripting. This entry can be closed out at any time by either party without any trust or custodianship by broadcasting the most recent version to the blockchain. Lightning Network. By creating a network of these two-party ledger entries, it is possible to find a path across the network similar to routing packets on the internet. The nodes along the path are not trusted, as the payment is enforced using a script which enforces the atomicity (either the entire payment succeeds or fails) via decrementing time-locks. Blockchain as Arbiter. As a result, it is possible to conduct transactions off-blockchain without limitations. Transactions can be made off-chain with confidence of on-blockchain enforceability. This is similar to how one makes many legal contracts with others, but one does not go to court every time a contract is made. By making the transactions and scripts parsable, the smart-contract can be enforced on-blockchain. Only in the event of non-cooperation is the court involved — but with the blockchain, the result is deterministic. Watch how common marketers, barbers and tattoo artists use Bitcoin via the lightning network: A barber from the Lightning Nomads community uses the Lightning network to get his fees in Bitcoin A tattoo artist from the Lightning Nomads community uses the Lightning network to get his fees in Bitcoin Bitcoin Coffee via the lightning nomads network Lightning is the future of Bitcoin: why? In a nutshell, the lightning network allows participants to transfer bitcoins between one another without any fees using their digital wallets. 
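The “Bidirectional Payment Channels” mechanics above (a funded two-party ledger entry updated off-chain, where only the most recent signed state is valid) can be sketched in a few lines of Python. This is a toy illustration with invented names and numbers, not real Lightning code:

```python
# Toy model of a bidirectional payment channel. Assumed, illustrative
# structure: a funding deposit fixes the channel capacity, and each
# off-chain payment is just a new state with a higher sequence number.
from dataclasses import dataclass

@dataclass
class ChannelState:
    seq: int          # monotonically increasing state number
    alice_sats: int   # Alice's current balance
    bob_sats: int     # Bob's current balance

class PaymentChannel:
    def __init__(self, alice_deposit: int, bob_deposit: int):
        # The funding transaction locks both deposits on-chain (2-of-2).
        self.capacity = alice_deposit + bob_deposit
        self.state = ChannelState(0, alice_deposit, bob_deposit)

    def pay(self, amount: int, from_alice: bool = True):
        # Each payment replaces the previous state; broadcasting an
        # older state later would be a "fraudulent channel close".
        a, b = self.state.alice_sats, self.state.bob_sats
        if from_alice:
            a, b = a - amount, b + amount
        else:
            a, b = a + amount, b - amount
        if min(a, b) < 0:
            raise ValueError("payment exceeds channel balance")
        self.state = ChannelState(self.state.seq + 1, a, b)

    def close(self) -> ChannelState:
        # Closing broadcasts the latest state on-chain for settlement.
        return self.state

ch = PaymentChannel(50_000, 50_000)
ch.pay(10_000)                   # Alice pays Bob 10k sats off-chain
ch.pay(3_000, from_alice=False)  # Bob pays Alice 3k sats back
print(ch.close())  # ChannelState(seq=2, alice_sats=43000, bob_sats=57000)
```

Note that the capacity never changes between the opening and closing transactions; only the allocation between the two parties does, which is why the intermediate payments need no blockchain fees at all.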
Payment channels are created between the two users so that they can transact with each other — in other words, off-chain transactions. The Lightning network is another layer added to Bitcoin’s blockchain so that it can process micropayments between participants. The goal of the network was to create channels in which payments could be made between users without any fees or delays. By allowing the transactions to be done off-chain, the processing time and the number of transactions done via the on-chain network would be improved. An example workflow of Bitcoin transactions via lightning It solves Bitcoin’s transaction fee problem up to a viable extent: how and why? Lightning Network is often touted as a solution to the problem of bitcoin’s rising transaction fees. Its proponents claim that transaction fees, which are one of the direct consequences of Bitcoin’s clogged network, will come down after the technology takes transactions off the main blockchain. But bitcoin’s congestion is one among several factors that influence its transaction fees. Besides, the cryptocurrency’s fee itself is a large component of the lightning network’s overall costs. 1. Opening and Closing Channel Costs Specifically, there are two parts to their costs. The first part consists of a fee equivalent to Bitcoin’s transaction charges in order to open and close channels between parties. Although the lightning network allows payments between two parties, an opening transaction or deposit must be made on-chain. The two parties can then process multiple transactions between each other, but once the bill has been settled, they need to record a closing transaction for the settled amount on the blockchain. 2. Routing Fees Besides the transaction fees to open and close channels, there is a separate routing fee to transfer payments between channels. Since the fees for the lightning network are quite low, in theory, it should attract more participants.
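To put the routing-fee mechanics in concrete terms: Lightning nodes typically advertise a small fixed base fee plus a proportional rate per million satoshis forwarded, and a payment pays that fee at every hop along its route. The hop policies below are invented for illustration:

```python
# Sketch of how a Lightning routing fee is typically quoted: a fixed
# base fee (in millisatoshis) plus a proportional rate in parts per
# million of the forwarded amount, charged by each forwarding node.
def route_fee_msat(amount_msat: int, hops: list[tuple[int, int]]) -> int:
    """hops: (base_fee_msat, fee_rate_ppm) for each forwarding node."""
    total = 0
    for base_fee_msat, fee_rate_ppm in hops:
        total += base_fee_msat + amount_msat * fee_rate_ppm // 1_000_000
    return total

# Paying 100,000 sats (= 100,000,000 msat) across three hops with
# assumed, illustrative fee policies:
hops = [(1000, 1), (1000, 100), (500, 50)]
fee = route_fee_msat(100_000_000, hops)
print(fee, "msat ->", fee / 1000, "sats")  # 17600 msat -> 17.6 sats
```

Even across three hops, the total here is under 20 sats on a 100,000-sat payment, which is why routing fees are negligible next to the on-chain costs of opening and closing the channels.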
However, if the fees are so low for the routing of payments between nodes, there might not be any incentive for the nodes to facilitate the payments. Also, as businesses adopt the lightning network as a method of payment, they may also charge fees. This problem contrasts with the approach being taken by other cryptocurrencies to increase their payments business. For example, Dash has free software plug-ins for merchants to download and use. Dash uses masternodes, who must have deposited 1,000 in Dash coins so that they can approve transactions very quickly. The fees for users are approximately two cents per transaction and Dash payments are available at more than 4,000 merchants. While exchanging funds on the Lightning Network, users pay truly negligible fees. They only have to pay a full network fee when they close the channel. 3. Remaining Online At All Times Makes Nodes Susceptible Nodes on Bitcoin’s lightning network are required to be online at all times in order to send and receive payments. Since the parties involved in the transaction must be online and they use their private keys to sign in, it’s possible that the coins could be stolen if the computer storing the private keys was compromised. However, cold storage of coins, which is considered the safest method for storing cryptocurrencies, is possible on a lightning network. 4. Offline Transaction Risk Going offline creates its own set of problems on the Lightning Network. According to Dryja, it is possible for one of the two parties from a payment channel to close the channel and pocket funds while the other is away. This is known as Fraudulent Channel Close. There is a time period to contest the closing of a channel, but a prolonged absence by one of the parties could result in the expiry of that period. 5. Malicious Attacks Another risk to the network is congestion caused by a malicious attack. 
If the payment channels become congested, and there’s a malicious hack or attack, the participants may not be able to get their money back fast enough due to the congestion. According to Dryja, “forced expiration of many transactions may be the greatest systemic risk when using the Lightning Network.” If a malicious party creates numerous channels and forces them to expire at the same time, which would broadcast to the blockchain, the congestion caused could overwhelm the capacity of the block. A malicious attacker might use the congestion to steal funds from parties who are unable to withdraw their funds due to the congestion. 6. Bitcoin’s Price Fluctuations The advent of Lightning Network is also supposed to herald Bitcoin’s viability as a medium for daily transactions. Customers are able to open payment channels with businesses or people that they transact with frequently. For example, they can open payment channels with their landlord or favorite e-commerce store and transact using bitcoins. However, Bitcoin still has ways to go before gaining mainstream traction. The increase in its transaction volumes is largely attributed to a rise in trading volumes. In other words, Bitcoin’s popularity is a double-edged sword since the increased attention garners investment but also attracts more traders increasing the volatility or price fluctuations in the cryptocurrency. The price volatility makes it challenging for companies to use Bitcoin as a method of payment when pricing their products to sell to their customers or to purchase inventory from their suppliers. Bitcoin Fluctuations in 2021 For example, let’s say a company has to pay an invoice to their supplier of bitcoin. Typically, suppliers give their clients time to pay, such as 30 days. If bitcoin’s price has increased by 10% during the 30 day period, the business has to come up with another 10% worth of fiat currency or another cryptocurrency to convert to Bitcoin and pay the invoice to pay the supplier. 
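The 30-day invoice example is easy to put into numbers. The invoice size and prices here are assumptions for illustration only:

```python
# The supplier-invoice example above, as arithmetic (illustrative
# numbers only, not real market data).
invoice_btc = 2.0                 # supplier invoices 2 BTC, due in 30 days
price_day0 = 50_000.0             # assumed USD/BTC price when invoiced
price_day30 = price_day0 * 1.10   # BTC rises 10% over the payment window

cost_day0 = invoice_btc * price_day0    # fiat cost if paid immediately
cost_day30 = invoice_btc * price_day30  # fiat cost if paid on day 30

print(f"Extra fiat needed: ${cost_day30 - cost_day0:,.0f}")  # prints: Extra fiat needed: $10,000
```

The business bears this 10% gap entirely, because its revenue is in fiat while its liability is denominated in Bitcoin.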
This exchange risk exists because the business might be paid by their customers in a fiat currency and not Bitcoin. The exchange risk also exists for consumer transactions since the salary or wages for most individuals are not paid in Bitcoin, leading to transactions being converted from a fiat currency to Bitcoin. As a result, the overall effect of the Lightning Network on reducing Bitcoin’s transaction fees and building scale may be limited since the crypto has yet to be adopted as a method of payment. The Future of Bitcoin’s Lightning Network There remain challenges with Bitcoin’s Lightning Network and its ability to boost scale while simultaneously lowering transaction fees. However, the technology’s core team has incorporated new use cases and has been researching additional features. As a result, there have been significant developments that are due to improve the network in 2021 and beyond. Larger Payments via Lightning Network Lightning had initially limited channel size to a maximum of 0.1677 BTC, but in 2020 it was announced that the constraints would be removed so that clients can have larger channels. These “Wumbo” channels are designed to increase the usage and utility of the Lightning Network for consumers and businesses. Larger payments, too, can be processed via the lightning network Crypto Exchanges One of the most promising initial use cases to emerge involves cryptocurrency exchanges. In December of 2020, the Kraken exchange announced that it will begin supporting the Lightning Network in 2021. At first, only withdrawals will be allowed as they get systems acclimated, but payment channels may become possible so that Lightning transactions can be done directly with the exchange. Watchtower Watchtowers are third parties that run on nodes to prevent fraud within the Lightning Network. For example, if Sam and Judy are transacting and one of them has malicious intent, they may be able to steal the coins from the other participant.
Let's say Sam and Judy put up an initial deposit of 10,000 bitcoins, and a transaction of 3,000 has taken place in which Sam purchased goods from Judy. If Judy logs off her system, it is open to possible fraud: Sam could broadcast the initial state, meaning they both get their initial deposits back as if no transactions were done. In other words, Sam would have received 3,000 BTC worth of goods for free. Closing the channel based on the initial state, rather than the final state in which all of the transactions have been done, is called a fraudulent channel close. The watchtower, a third party, can monitor the transactions and help prevent a fraudulent channel close.

The Bottom Line

The Lightning Network is an ever-evolving concept that is likely to make a significant difference to Bitcoin's blockchain. However, the network might not be the solution to all of the challenges facing Bitcoin. Also, as new changes and improvements are made to the network, there's the potential for new problems within the cryptocurrency's ecosystem. Much will depend on the research and development of new technology in the future.

Connect with Naser on Instagram for the best tips on earning bitcoins! Naser's short videos on earning bitcoins are definitely worth it!
https://medium.com/@surajbeera/lightning-nomads-the-future-of-bitcoin-payments-3bcec408f668
['Suraj Beera']
2021-08-23 08:01:44.388000+00:00
['Payments Technology', 'Lightning Network', 'Startup', 'Bitcoin', 'Cryptocurrency']
BitCoin’s Gone Green. BitCoin will be the most…
BitCoin's Gone Green

Dedicated to: Jerry Chan, CPO of TAAL, whose knowledge of BitCoin Node inner workings is eclipsed only by his modesty (and who contributed a nifty idea of saving a lil' bit of power via NONCE-reuse, as well as better explaining BitCoin Network relay fees)

You've seen the complaints and criticisms:

"generating Bitcoin requires a truly staggering amount of energy. The electricity used in a single Bitcoin transaction, for instance, could power a house for a month." — Adam Jezard, World Economic Forum

"it's impossible for 98 percent of the devices during their lifetime to make the calculation that actually results in a reward. So, the rest are just running pointlessly for a few years, using up energy, and producing heat, and then they will just get trashed because they can't be repurposed. It's insane." — Alex DeVries, Blockchain 'expert'

"the amount of energy needed to run the Bitcoin network annually has surged to a record-breaking 77.78 terawatt-hours. Roughly the same as the entire electrical consumption of the nation of Chile" — Digiconomist

These statements are both true and false, depending on the property values you assign to two separate aspects of BitCoin:

Age: Young "baby" BitCoin (2020) is much different from older BitCoin at scale (2030, 2040, beyond), and this article's purpose is to spend time on the one we talk about less: BitCoin at scale, or old-man BitCoin.

Fork: The answer on power consumption is different for each fork of bitcoin, which today mainly comprises (in order of market cap) btc ("bitCorn"[0]), bch ("abc"), and BitCoin (original protocol, or BSV).

BitCoin will be the most power-efficient computation AND payment network in the world, bar none. BitCoin is truly an intricately-engineered information commodity, and this article aims to show you just one aspect of this beauty: the part of its design which will conserve energy.
While explaining BitCoin's masterful qualities which allow it to save energy, this article will shed a little more light on why today's highly-speculative "digital gold" punters (regardless of their namebrand popularity) have missed the mark by choosing the wrong bitcoin fork. If you read "3 Wrights don't make a Wrong" [1] you'll understand this theme: how so many savvy, well-known, successful folks can be so wrong for so long.

"The power of BitCoin is a curious thing; make segwit fork weep, make a visionary fork sing. Remove some OP_CODES, kill the fruit of the loin; more than a hash-rate, that's the power of BitCoin." — Huey Lewis & the News (Power of Love)

"Doc" is also a nickname Ken Shishido uses for Dr. Craig Wright, a theme we've seen before in London. The "flux capacitor" in BitCoin is transaction-verification; the DeLorean's flaming tire-tracks are just the "PROOF of work."

It's worth noting that a system which creates the most efficient worldwide computing network (for information, art, and progress) is a gift Satoshi Nakamoto gave us all back in 2009. It's about time we understood BitCoin's elegance with respect to conserving valuable energy.

For a while Americans couldn't "traverse the plains like a man" (see Aaron Freeman video below), using dirt roads and mountain passes; then railroads added orders of magnitude of improvement in transportation efficiency, with speed, security, and tunnelling thru mountains. This historical precedent is much like what the BitCoin Network will do for our current world wide web (WWW) internet network. Before we talk about how BitCoin saves energy, like railroads saved energy versus the horse-and-buggy wagon trains of Lewis & Clark, it's best we start by narrowing our focus via process of elimination. First we'll dismiss "cryptocurrencies" which will NOT save any energy and are inefficient NETWORKS (networks being a distinct sub-topic of technology versus CODE) as they are scaled.
In other words, we'll start with "Fork" first, then analyze the only scalable fork of BitCoin to discuss why "Age-ism" has blinded the "bitcoin wastes energy" critics. Finally, we'll describe BitCoin's "Proof-of-Work" in the clearest way possible. The mission of this article is two-fold:

Show the reader how BitCoin Satoshi Vision will be the most energy-efficient computation network ever.

Leave you with unparalleled understanding of "Proof of Work" as it pertains to BitCoin.

After reading this article you'll never think of "PetaHashing", "Hashpower", or "POW" the same way again, and if you currently think of btc as bitcoin, you'll begin to understand why it has no hope of such a vaunted title. BitCoin SV is a baby, but that doesn't mean we can't analyze its young, knowable traits to make an accurate foretelling of its energy efficiency as an older man. The kid's future is bright!

btc "POW" [2] is a waste of energy

btc (otherwise known as "bitCorn"[0], the one breaching $20,000 per bitcoin on "digital gold" dreams) does not have a proof-of-work, because ASIC-hashing isn't work. Work's definition even mentions what outcome from the energy spent is required: "USEFUL" [1]. Let's say that again: ASICs hashing, despite the "proof of work" moniker, isn't work. This is actually the main point of this entire article, but we'll get to that in finer detail later. btc still doesn't play its POW guitar like a man; BSV does, with both big blocks AND scale. Playing amazing guitar is the work; the proof is selling sold-out concert seats.

btc's mistakes (plural) began way back in 2010, when Satoshi Nakamoto made what ended up being a poor choice of steward when handing over the code repository "keys" of BitCoin to Gavin Andresen. You can sympathize with the man; he was under the wrong impressions that:

Everyone understood his vision.

His protocol would not be changed.
When the Wright brothers, or Karl Benz, or Thomas Edison revealed their WORKING inventions to the world, they could be taken apart and examined to understand the specific function each part had in the whole. This is MUCH easier with physical inventions like phonographs, moving-picture projectors, and even invisible electricity (especially Edison's DC power). But the most amazing parts of Satoshi's invention weren't the code or the cryptology or the math: they were the incentive system, the understanding of network topology, and most of all the vision for how his system would GROW. Satoshi's vision was akin to God creating DNA for a species before ever SEEING the adult version of the species! The point here is, sometimes a man is so many steps ahead of the herd in his thinking that he doesn't realize how hard it is for people to catch up without explaining everything. He explained a little bit for sure, but not enough. So a coders-only mono-culture took over a contraption that required more than just keen coding knowledge. You might be a whiz with internal combustion engines, but that doesn't mean you know how to fly the airplane it propels, or even understand how the flight controls work! Gavin, under some not-so-innocent pressure from early bitcoiners with hidden agendas, was subsequently duped into giving copies of the repository keys to Gregory Maxwell and other members of "Core", who then quickly rigged bitcoin for the sole purpose of increasing its price. This "digital gold" narrative isn't surprising; rigging an entity for a near-term price increase actually happens often in the stock world! It's why GE and GM are drains on citizen-taxpayers via Too-Big-To-Fail (2B2F) bailouts. [2] Jack Welch and several GM CEOs ran those great brands (GE was Thomas Edison's baby, a guy in the running for America's greatest man) into the ground in order to pad their own pockets in the NEAR term, instead of thinking about the long term.
Similarly, the $25,000 price of bitcoin is unfortunately a perfect indicator of how bitcoin is being run into the ground, just as AOL's record-breaking acquisition of giant Time Warner in 2000 was a perfect indicator of why "walled garden" internet protocols would not reign, and the real winner was going to be Tim Berners-Lee's world-wide-web (WWW) internet protocol, born in 1990. Notice the similarity of the timeline as well! Big Wall Street and corporate bigwigs bet on AOL, the wrong horse, and the peak of that mal-investment curve occurred just as a still-young 10-year-old WWW was about to "run the table" on internet protocols. In 1994, Prodigy, CompuServe, and AOL were a big deal, and AOL emerged the winner of that group of losers by 2000. Then the worldwide web (WWW) took over for good. AOL's meteoric rise and fall is great historical precedent for the mistakes of btc's "Core" team, who misinterpreted Satoshi Nakamoto's 2009 protocol the same way the walled-garden companies misinterpreted the 1990 WWW protocol. Voila! Currently it's 10 years after the invention of the protocol (2009–2020), and the same thing is about to occur where all the Wall Street and Silicon Valley glitterati are missing the REAL BitCoin TRAIN, AGAIN. Ain't historical precedent fun? https://sym.re/Gd9JA2o (read for more fun historical precedent going back even further than AOL+Time Warner) The "Corn" team removed the functionality of the scripting language (akin to removing the wings from the plane), solidified the 1 MB block limit, then began taking away signatures (SEGWIT) and setting up virtual money-laundering machines with "Lightning Network". They created a rocket which could soar, but couldn't land the people riding it safely. bitCORN removed the scripting-language functionality from bitcoin, similar to removing the wings and landing gear from an airplane. The jet engines can take you up into the heavens, but it's in the downswings that we find out what an asset is REALLY worth.
btc is a one-trick pony: all it does is run ASIC competitions in which miners fight for the ever-halvening subsidies. 50, 25, 12.5, 6.25: these are the block subsidies, which took approximately 12 years to reach the 6.25 point. There are only three "outs" for btc miners to keep processing transactions profitably as time goes by:

1. The price of btc must double before or immediately after the next halvening drops the subsidy. A miner with a fixed fleet of ASIC rigs burning $125,000 of energy a month breaks even at a btc price of $10,000 and a block subsidy of 12.5, winning 1 block per month. After the halvening, the miner has a negative 50% profit margin, as the payout will be 6.25 x $10,000 = $62,500, not enough to cover his $125,000 energy cost. BUT, if btc's price doubles to $20,000, the ponzi scheme can continue!

2. btc's mining fees could make up for 100% of the lost subsidy from the halvening. In our example above, using 1 MB blocks at about 7 transactions per second (7 txns/sec * 60 sec/min * 10 min = 4,200 transactions per block), if mining fees were about $15 per transaction then the btc price could remain at $10,000 and the miner would be fine! This is considered a "feature" of btc: "digital gold," but only for rich people who can afford massive fees! These exorbitant fees would need to climb again each halvening: $30, $60, $120 per transaction as the subsidy drops to 3.125, 1.5625, and lower!

3. The final solution is the most energy-efficient: let everyone throw half of their energy-burning ASIC mining rigs in the trash! The puzzle difficulty will adjust, and miners would actually reduce their energy consumption in our example from $125,000 to $62,500 per month to maintain profitability. Who wants to throw their capital into the volcano first?

ASIC miners not only waste energy, but then they are WASTED when disposed; Gordon Moore planned for disposable chips, but after 18 months??
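The break-even arithmetic above can be checked with a quick sketch. The $125,000 power bill, the one-block-per-month win rate, and the 7 txns/sec ceiling are the example's own assumptions:

```python
# Back-of-the-envelope check of the halvening break-even example.
energy_cost_per_month = 125_000     # USD: the example miner's power bill
btc_price = 10_000                  # USD per btc in the example

def monthly_revenue(subsidy_btc, price_usd, blocks_won=1):
    """Subsidy revenue for a miner winning `blocks_won` blocks a month."""
    return subsidy_btc * price_usd * blocks_won

# At a 12.5 subsidy the miner exactly covers the power bill.
before = monthly_revenue(12.5, btc_price)        # break-even

# After the halvening, revenue halves: a 50% shortfall...
after = monthly_revenue(6.25, btc_price)

# ...unless the price doubles to $20,000 (out #1)...
doubled = monthly_revenue(6.25, 2 * btc_price)

# ...or fees fill the gap (out #2): at 7 txns/sec, a 10-minute block
# holds 7 * 60 * 10 = 4,200 transactions, so each one must carry
# roughly $62,500 / 4,200 in fees.
txns_per_block = 7 * 60 * 10
fee_needed = (before - after) / txns_per_block
print(f"fee needed per transaction: ${fee_needed:.2f}")
```

Each further halvening doubles the required per-transaction fee, which is the escalation the text describes.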
The btc core team have embraced 1 and 2 above, as 3 seems a bit silly for anyone investing capital. So let's review the logical outcomes of each of their "solutions": Every halvening, btc will need to double in price. $20,000 is good for a 6.25 subsidy, then $40,000 at 3.125, $80,000 at 1.5625, $160,000 at 0.78125, $320,000 at 0.390625, $640,000 at 0.195…, $1.28mm at 0.098…, $2.56mm at 0.049…, $5.12mm at 0.024…. At a $5.12mm btc price we're past $100 trillion in market cap (the US monetary base is only $5 trillion). Eventually this number gets absurd, and miners will be forced to adopt one of the other choices. Forever-doubling per-transaction fees is even more absurd, but this choice is exactly the one the btc core team would have you believe in. What they allude to is that "digital gold" will eventually just be traded between nations for very large sums, so these nations will have no problem paying $10,000 and more per transaction for a simple movement of 200 Bytes which costs a computer a tiny, tiny fraction of a penny to transmit and receive. Remember: fewer participants means fewer transactions, and fewer transactions means an even HIGHER fee is needed! Core-lovers really don't like to do math or think too far into the future. DIGITAL GOLD NUMBER GO UP SOON! All roads lead to just shutting down unprofitable ASIC mining rigs when you examine the options above. This would work, and price could stabilize. There's just ONE big problem with this solution: what happens when the btc price takes a swing downward, like oil did in March (negative $40 per barrel in West Texas) or gold did from 1980 to 2001 ($800 to $200)? Oil can run cars and provide heat, and gold can be used in excellent circuits, non-oxidizing artwork, bigger wedding rings, or anti-bacterial paint. What do you do with cheap btc to eat up some supply and make the price stop dropping once a price drop occurs?
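A short loop makes the doubling treadmill concrete. The constant $125,000-per-block value and the 21 million coin cap come from the surrounding discussion; this is an illustrative sketch, not a price forecast:

```python
# The "price must double every halvening" treadmill: the subsidy
# halves, so the price must double to keep each block's subsidy
# worth a constant $125,000.
subsidy, price = 6.25, 20_000        # starting point from the example
for _ in range(8):                   # eight more halvenings
    assert subsidy * price == 125_000    # block value held constant
    subsidy /= 2
    price *= 2

print(f"final: {subsidy:.7f} btc subsidy needs ${price:,.0f} per btc")

# At ~$5.1 million per coin, 21 million coins imply a market cap
# above $100 trillion (the US monetary base is about $5 trillion).
market_cap = price * 21_000_000
```

Eight halvenings after the 6.25 era, the required price is already in the millions of dollars per coin, which is the absurdity the text points at.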
Please watch this video on this very topic: https://streamanity.com/video/E0IXSKs2hrUK

Evaluating energy conservation in btc is easy tho: all energy goes to ASICs hashing, which is solving a puzzle with no value to society. Regardless of how btc solves its halvening problem, it is ALWAYS just burning energy doing a pointless task. Think about ASICs hashing when blocks are stuck at 7 txns/second (~0.5% of average Visa levels), which a single personal computer can handle easily, and the mining subsidy is 0.001 bitcoins per block. What is the energy getting the miner at that point? It's not a "proof of work" for the newly minted coinbase coins, because there just aren't many of those. The ASICs aren't processing transactions, because just ONE of them could handle the whole network. The energy is COMPLETELY WASTED no matter how you think about it.

Ethereum cares only about how much shit you have stacked, not how much shit you're going to compost in the future!

ethereum 2.0's "POS" will prevent the ether network from scaling

ONE of the flaws in proof-of-stake [POS] cryptocurrencies is their inability to scale. Scale requires investment in networking equipment which can process transactions, and no one just DONATES equipment without getting paid for it in proportion to the transaction expense! VISA and Mastercard both invest billions in processing machines to handle all the traffic on their payment networks; but they are paid PER transaction to ensure all runs smoothly. However, this natural incentive for Visa/MC doesn't exist for the largest block-reward earner on Ethereum, due to the very definition of Proof-of-Stake! The spoils of ethereum's inflationary coin payouts go to the guy who already owns the most, not the guy who processes the most transactions correctly! Visa's NET "property & equipment" investment is almost $3 billion in 2020; how much will the biggest POS ethereum processor spend on processing equipment? Consider a hypothetical example involving Visa.
Instead of investing in their own network equipment to make sure your Xmas presents arrive securely and on time before the holidays, Visa instead pays TWO third-party suppliers to do transaction processing FOR them. They choose to outsource to EtherFat Processing LLC (EFPL) and NoEtherFastTxn Inc (NEFTI). Instead of Visa paying each based on how fast, cheaply, and securely a company processes Black Friday transactions, they pay based on which company has more dollars in the bank. EtherFat has $9.999 billion in the bank, and NEFTI has $1 million in the bank. NEFTI is a very responsible, forward-looking Node who prepares well for the Xmas transactional rush, so its CEO spent $9.999 billion on processing equipment to be able to handle 80,000 transactions per second. Meanwhile, EFPL "prepared" for the holiday rush by spending $1 million on some Raspberry Pis which can handle 50 transactions per second. NEFTI will be able to process all the transactions no problem, but EFPL will be hamstrung and will begin raising transaction fees, not because they aren't making tons of money, but because they will be incentivized to LOWER the number of transactions on the network so their tiny Raspberry hobby-computers can handle it. If only the great coder-geniuses of Ethereum figure out how to route ALL transactions above 50/second to NEFTI, Xmas will be fine this year! But that's not how POS works! Proof of Stake pays the processor based on what's in the DOLLAR bank, not in the SERVER bank. So Vitalie Byutes routes all the traffic to the responsible poor node who does all the work, and then routes all the fee-rewards to the rich node who's loafing on the sideline counting his ethereum HODL-coins [2]! One guy has stacked sats; the other guy has stacked transaction-processing machines. There's no real reason to go any further with Ethereum, because they proved many times over from 2016–2020, while still Proof-of-Work, that scale was not occurring.
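The incentive mismatch in the EFPL/NEFTI hypothetical can be sketched directly. The names and figures below are the article's own hypothetical, and this is a toy model of stake-weighted allocation, not Ethereum's actual validator logic:

```python
# Toy model of the mismatch described above: proof-of-stake pays by
# coins held, not by transactions processed. Figures follow the
# article's EFPL/NEFTI hypothetical.
nodes = {
    "EFPL":  {"stake": 9_999_000_000, "tps": 50},      # rich bank, tiny servers
    "NEFTI": {"stake": 1_000_000,     "tps": 80_000},  # poor bank, big servers
}

def pos_reward_share(name):
    """Reward share under stake-weighted (PoS) allocation."""
    total = sum(n["stake"] for n in nodes.values())
    return nodes[name]["stake"] / total

def work_share(name):
    """Share of actual processing capacity each node brings."""
    total = sum(n["tps"] for n in nodes.values())
    return nodes[name]["tps"] / total

for name in nodes:
    print(f"{name}: {pos_reward_share(name):.2%} of rewards, "
          f"{work_share(name):.2%} of capacity")
```

The node supplying virtually none of the capacity collects virtually all of the reward, which is the point of the Visa thought experiment.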
In Physics there's theoretical and experimental, and experimental ALWAYS trumps theoretical. We'll leave it to the reader (or the comments) to collect all the mishaps in Ethereum's ignominious history of not being able to scale. There's simply not time to go thru the thousands of forks and/or copycats of original BitCoin; the main POW and POS cryptocurrencies covered above will be sufficient. The boring part of our article is covered; it's time to inspect the DNA of original BitCoin, BitCoin Satoshi Vision, to see if it will indeed be the most energy-efficient computer network ever built! Satoshi Nakamoto set the BitCoin DNA (protocol), but the organism which is the BitCoin NETWORK builds itself according to input from the surrounding environment. Satoshi knows not exactly what will be built finally, but has a better idea than anyone of what qualities it might have.

A whole season championship won every 10 minutes

Crash and burn. All the stars explode tonight. How'd you get so desperate? How'd you stay alive? Oh, come on be alive again; don't lay down and die. Oh baby drive away, to Malibu — Hole (Courtney Cobain)

It was the biggest day I'd ever seen; the east coast had its nor'easters with all their chop and some rare clean big days after those storms, but California-Pacific waves had a LOT more weight to them for the same height. A 2-foot-high wave in Ocean City, Maryland, Manasquan Jetty, or even way "out east" on Long Island at Ditch Plains wasn't enough to propel a grown man down the beach on even a 9-foot longboard. But a 2-footer in front of the Surfer Motor Lodge in Pacific Beach, if mishandled, could break your neck in the shallow water if you weren't aware of the extra UMPHH it had. But this was no 2-foot day; it was "8 to 10" at the surf report. On a point break like Malibu, 8–10 feet is actually handleable. You can get outside by staying way left of the incoming sets, and then later traverse northwards into the drop zone.
But the Malibu pier is a point break, and I was looking at a beach break with waves everywhere. There was no easy avenue to get out to a safer, deeper area beyond the break of the incoming sets; you had to WORK for it. I was still a novice surfer in the fall of 1994, but I was also 23 years old and in peak shape. Swimming and crew were my main sports growing up, and I had some experience rowing surf boats in New Jersey, and this made me highly overconfident in the water. I was ignorant. In surfing, rides are the block rewards, and paddling is your capital investment. So young, strong, and overconfident back then, I didn't even stop to consider that paddling a 9'10" longboard into 8–10 feet of OCEAN break was a tall task. I just jumped in and aimed straight out. 15 long minutes later, I'd gained about 6 yards. 30 minutes later, I had finally made it out beyond the constant break of whitewater, and only because I wasn't taking no for an answer. I screamed at the waves, never stopped paddling, and used every trick I'd learned reading old-school surfing books from the 50s and 60s in the local library. Failed duck dives (on a longboard? yeah right!), roll-overs, and I even tried the "stand up ride-over" method I noticed Hynson and August used in Endless Summer (at Ins & Outs). I only remember two things from that day. I almost drowned on the first wave I took. I was exhausted and didn't respect it, so after finally getting out there, I took the first sexy 8-foot wave and was held under longer than ever in my life. I vividly remember giving up trying to get to the surface and breathing in anyway. I sucked in foam, luckily, instead of water, which is probably the only reason I didn't die. After spending another 30 minutes getting back out (yep, so young and ignorant I didn't even respect that the conditions were far above my abilities) against never-ending set waves, I saw a lull behind me.
For about 5 minutes the water between me (outside in the drop zone, exhausted) and the beach was completely placid; you could skip a stone a dozen times without hitting a ripple. What the hell was that, I thought? Hop over this last wall of whitewater, because behind it is a "lull in the action", a doldrum which allows old guys to conserve energy. Over the next several years, I noticed this "break in the action" phenomenon over and over again, and one day in my late thirties, tired and out of shape, I had no choice but to use my WIT against the ocean to get "outside". By then I was a decent surfer who'd handled 15–18' California waves (which means like 6 to 10 feet in Hawaii!), with lots of wave-watching experience. So like any experienced surfer, I waxed my board with my eyes open, looking for that break, while I watched a young 23-year-old use muscle and determination to battle countless walls of whitewater. I only smiled wisely, examining youth and ignorance, and then I saw the break in the action. I wasted no time, going into full sprint mode, paddling crisply thru flat water at a nice clip. In 2 minutes I was outside, sitting on my board, a bit winded but not exhausted and sapped. The 23-year-old beat me to the drop zone by maybe a few seconds. While he took a quick breather, I was on the first wave. Mind over muscle; experience over a wealth of health. Wisdom is an asset in surfing. After the learning experience described above, I tended to use probability, topography, game theory, and sociology in the water at Malibu on crowded days. At one point my surfing was almost auto-pilot: I didn't think, I just reacted to every person, wave, and current around me. You'd have to pay me good money to reveal the secrets of how to get a wave at the "Bu"; it took 2 years of constant attention and failure to learn the tiny little tricks. In surfing we call it "getting a break wired".
Before I moved back to the east coast from Pacific Palisades, California, I could net a wave every 10 or 15 minutes at the Bu with 80 people in the water on a Saturday (with Pamela Anderson on the beach watching her teenage son in the water) without even cracking the 50th percentile in paddling speed. These are surf allegories to highlight how the age of the BitCoin Node Network changes its behavior.

Pretty normal day at the 'Bu. This is the first wave of a set, and the smart old guy on the right goes right past this wave AND the next one to get wave #3. Here we see a level of competition similar to BitCoin Nodes fighting for a big block of public transactions!

BitCoin right now is just a baby. It only seems old in 2020 because BitCoin was born in 2009. But realistically BitCoin kinda drowned young in 2011 or so, and was revived from a coma in late 2018 by Calvin & Craig as BitCoin Satoshi Vision. That repair work is still ongoing as we wait for Chronicle to solidify the protocol once and for all, but we're about to take the diapers off as Nodes shift from fighting for 6.25 or 3.125 bitcoin block subsidies to 0.2 satoshis per Byte transaction fees. You see, BSV isn't going to need the price to go up, or fee prices to climb, or servers turned off to save energy. BSV is built on BIG BLOCKS: the bigger they are, the more money the block-producer earns. The energy cost of competing is paid for by SCALE, and scale makes the physical network bigger by increasing the annual block reward pool. While the halvenings take away the block subsidy, bigger blocks increase the amount of revenue one can win inside 10 minutes, or 1 year. Blocks are the same in number every year, governed to be 10 minutes apart; but there's nothing in the protocol which says they can't amass more and more transactions, which increase the total fees per block.
Forget "Moore's Law"; Gordon Moore's TRUE genius was realizing that Intel could lower prices and sell more chips at a small margin, increasing the market for chips by making personal computers cheaper! He wielded this little bit of wisdom like a club over his competitors in the early days of CPUs. The 8086 and other early Intel CPUs weren't the best architecture; the architecture was "good enough", and instead Intel focused on lowering price and winning via SCALE! BitCoin Nodes are like young surfers now, relying on all muscle and ignorant grit; but as they gain experience, and overcome a few near-death experiences, they'll accumulate little "tricks" to exploit. Energy-saving tricks! ASICs mining is ignorant; it's just a matter of money and maybe getting a good price on energy costs. Transaction processing is hard; it has complex nuances. Customers for data transactions and microtransactions will have specific needs based on their business: some Node customers need speed, some will need precise time-stamping, and still others will need cheap prices. Depending on what happens with battery technology and renewables innovation, electricity/energy costs may not even remain the dominant concern in Node profitability. Just as the young surfer, no matter how strong, cannot possibly see all the conservation tricks, the current subsidy-centric bitcoin Nodes cannot see what they will need to win blocks as the contest shifts from one of strength to one of guile. What we can bet on with certainty is that running a contest every 10 minutes ensures that all Node players will use every trick they can conjure to lower energy costs. This is the incentive system Satoshi gave us with his protocol; forget the code or r-puzzles, built-in incentives are the sexiest thing in BitCoin!
What we've just shown is not concrete current evidence why BitCoin will be more energy-efficient than, say, Amazon or Google Cloud; we've simply shown that BitCoin's protocol creates an incentive system which rewards efficiency when processing transactions. We've talked more about what makes BitCoin energy-efficient via its DNA, or protocol. It's centered on big blocks, for sure. But what about some BETTER analysis which can show us that wasteful ASICs mining will be a low percentage of cost for Nodes in the future, at scale?

The AGE-ism of BitCoin

Old man take a look at my life; I'm a lot like you were. Give me things that don't get lost, like a coin that won't get tossed. I've been first and last; look at how the time goes past. — Neil Young, Old Man

Predicting low energy waste via profit margin analysis

Margins will always be low in commodity businesses [see the book Gorilla Game, "monsters" not gorillas], so if we assume this is true of BitCoin Nodes, we can make an assumption about what percentage of capital expenditures Nodes will need to spend on processing versus what percentage they'll need to spend on puzzle solutions (ASICs). Energy-wasting puzzle-hashing should be a VERY low percentage of cost at scale, if BitCoin is to be the most efficient computer network in human history. Let's do another hypothetical example of two competing nodes. Atlantis Asics POW ("AAP") is a BitCoin miner in 2019 with a huge investment in fast ASICs mining rigs. They cut their teeth on 12.5-bitcoin block rewards, while transactions were maybe worth $1 in extra "tip" revenue. Let's say BSV was trading at about $200 all year in '19. AAP made 12.5 bitcoin/block x $200/bitcoin, plus about $1 of 1-sat/Byte transaction fees, for roughly $2,501 per block in revenues. AAP was making less than 0.1% of its revenues from transactions, and over 99.9% from "coinbase minting" block subsidies. Thanks Satoshi!
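That revenue mix can be tallied in two lines, using the example's own figures ($200/coin, 12.5 subsidy, roughly $1 of fees per block):

```python
# AAP's 2019 revenue mix from the example: subsidy vs transaction fees.
subsidy_rev = 12.5 * 200          # ~$2,500 per block from the coinbase subsidy
fee_rev = 1.0                     # ~$1 per block in transaction "tips"
total = subsidy_rev + fee_rev

fee_share = fee_rev / total
print(f"fees are {fee_share:.2%} of block revenue")
```

Fees come to well under 0.1% of revenue, which is the lopsided, subsidy-dominated mix the text describes for "baby" BitCoin.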
Old Guy Osiris Transactions ("AGOT") is a BitCoin transaction processor from the future (2040) with a huge investment in transaction-processing equipment. Why so much investment in txn processing, AGOT? Well, in 2021 Apple started using all their cloud-prowess and server innovations to compete for BSV blocks. So they entered by putting a huge chunk of cash into transaction equipment. AGOT reacted by specializing in a certain niche of data transactions, starting in 2021 with SLictionary definition (information) tokens which traded as if on a stock exchange. Other information-on-coin companies copied SLictionary's exchange model, and AGOT ran away with the niche even against Apple, who was busy figuring out the new sector by bullying its way around. Crafty AGOT even sold all their ASICs and outsourced that function to others who specialized in it. When AGOT accumulated a lot of private, localized (to them, in North America) transactions, they'd simply send their block to an ASICs specialist to get a winning hash and send the whole thing in for a split of the revenues. No one could beat AGOT's cheap, secure, fast transaction processing in the information-token niche. Remember Visa spending $3 billion on CURRENCY-transaction equipment? Google has a net investment of over $80 billion as of 2019 in DATA-transaction equipment & property. Data is still growing faster than payments and is already >25x bigger. BitCoin should look like a similar ratio by 2040.

Home Court Advantage: ASIC-miner, early 2020

Then something amazing happened: old AGOT of 2040 travelled back in time to 2020 to compete with young AAP for blocks. How would this go? In early 2020, block subsidies were 12.5 and block sizes were 1 MB, which generated an extra dollar of fees. AAP won every block handily, while AGOT got none. AGOT simply didn't have any hashpower; whereas that's all AAP had.
AGOT looked around and saw no BSV app companies using data tokens, so there was no business to get; they just powered down their servers and spent zero on energy costs. Whereas AAP had a big power bill from running hashing-rigs. Even though AAP easily beat AGOT, in 2020 they found ASICs-mining to be quite competitive and eked out a 1% profit margin. It looked like this:

Block revenues = $100
ASIC electricity cost = ($73.99)
ASIC equipment depreciation = ($25)
Transaction processing/verification depreciation & electricity = ($0.01)
Profit = $1, or 1%

In this contest, AAP eked out a small profit while AGOT could only minimize losses by turning all their transaction processing equipment off. If BSV had gone up in price, AAP probably would have made the same 1%, as many other Nodes would’ve invested in equipment to chase the price. But every dollar was spent on ASICs, which perform an expensive but meaningless task. All expense dollars are effectively wasted, because solving meaningless puzzles is not an asset which benefits anyone or anything. THIS example is why we can claim that even BSV is energy-inefficient today. But this battle was fought on AAP’s “home court advantage” of early 2020. Now we’ll turn the tables!

Home Court Advantage: Transaction-Processor Node, 2040

In 2040 the information-token-exchange business is ginormous; it seems like everyone is doing it. It even eclipses commodity-bitcoin monetary transactions by a 30:1 ratio. AGOT has all the right equipment to do this business, but still has no ASICs. AAP, on the other hand, was able to upgrade its ASICs for 2040 to top-of-the-line equipment, to make the contest fair. They spent ALL their money on ASICs chips. So how did it go? AAP kept winning blocks like crazy! They even won 10 in a row at one point, with each win taking only 30 seconds! They were destroying the difficulty algo. Just one problem tho: in 30 seconds the transaction amounts were scant. Why?
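AAP's hypothetical 2020 income statement works out exactly, as a quick check shows (all figures are the made-up numbers above):

```python
# AAP's hypothetical per-block income statement, early 2020.
revenue = 100.00
asic_electricity = 73.99
asic_depreciation = 25.00
txn_processing = 0.01    # verification gear costs nearly nothing at 1 MB blocks

profit = revenue - asic_electricity - asic_depreciation - txn_processing
margin = profit / revenue
asic_share = (asic_electricity + asic_depreciation) / (revenue - profit)

print(f"Profit: ${profit:.2f} ({margin:.0%} margin)")
print(f"ASICs' share of total costs: {asic_share:.2%}")
```

Under these assumptions about 99.99% of every expense dollar goes to the puzzle-hashing side, which is the article's point about waste at small block sizes.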
Well, for one thing 30 seconds is only 5% of a normal block time, and secondly no customers were giving AAP any private transactions, so AAP was only getting some generic no-data monetary transactions. So actually their revenues looked more like they were getting 1% of average block rewards in 2040.

What about AGOT? Well, AGOT immediately made low-margin deals with all their old customers to gain back market share they lost when they took a time-warp sabbatical to 2019 to please John Pitts’ wacky time-machine Node experiments. They accumulated PLENTY of “SLA” transaction volumes to build big healthy blocks chock full o’ revenues. There was just one problem: they had spent all this money on transaction processing and the corresponding energy costs, but couldn’t win blocks to enter their private transactions without ASICs. So what could they do?

Did you ever see the Gonk and the Geefle on Sesame Street’s Planet of Snu? Geefle could reach the nectarines, but his arms didn’t bend, so he couldn’t eat them. Gonk couldn’t reach the nectarines because he was too short, but he could eat them. They formed a partnership and both ate their fill. Old AGOT and young AAP did the same. AAP couldn’t generate revenues without transaction processing, because in 2040 block rewards were effectively nothing after 5 more halvenings since 2020 (3, 1.5, 0.75, 0.375…, 0.18…). With BSV at $200, AAP was winning small revenue rewards of about $36. It didn’t matter that they won a lot of blocks quickly, as they didn’t have any transactions in the blocks. AGOT couldn’t win ANY blocks, despite amassing what would have been 2-TeraByte blocks (2x10¹² Bytes) at 0.2 sats/Byte => 4x10¹¹ sats => 4,000 bitcoin * $200 => $800,000 blocks! AGOT’s transaction fees were so big, they said “keep the change” on the 0.18… block subsidy worth only about $36. So obviously a Gonk-Geefle deal was worked out between AGOT of 2040 and AAP from 2020. But what were the splits?
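The 2040 block-value arithmetic above checks out; all inputs (block size, fee rate, price, the ~0.18 subsidy) are the article's hypotheticals:

```python
# Hypothetical 2040 block: fee revenue dwarfs the shrunken subsidy.
SATS_PER_COIN = 100_000_000

block_bytes = 2 * 10**12      # a 2 TeraByte block
fee_rate = 0.2                # sats per Byte
price_usd = 200               # assumed BSV price

fee_sats = block_bytes * fee_rate                 # 4e11 sats
fee_usd = fee_sats / SATS_PER_COIN * price_usd    # $800,000
subsidy_usd = 0.18 * price_usd                    # ~$36 after more halvings

print(f"Fees per block:    ${fee_usd:,.0f}")
print(f"Subsidy per block: ${subsidy_usd:,.0f}")
print(f"Fees are roughly {fee_usd / subsidy_usd:,.0f}x the subsidy")
```

By this point in the thought experiment the subsidy is a rounding error next to transaction-fee revenue, which is why "keep the change" makes sense.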
AGOT was like Google + Visa; they had $100 billion in processing equipment running just to be able to handle the gargantuan size of the network. They had no choice. If AGOT couldn’t handle every transaction given to them by application partners, their blocks wouldn’t have any transactions and it wouldn’t have been worth it to win them anyway. Processing transactions was where ALL the money was! $800,000 vs $36. Since ALL Node businesses, no matter if they were processing transactions or hashing with ASICs, were low margin, we can imagine that energy use was proportional to each company’s task. AGOT needed tens of billions of dollars of equipment to grab revenues worth $800,000, and AAP needed a few sharp ASICs hasher-rigs to nab those $36 blocks! Ok, so maybe we forgot to add the 0.5% generic transactions to AAP’s total, so maybe $4,036 was their average revenue per block? Either way, the ASICs-hashing just doesn’t win any real money, so why would a large transaction processor spend a lot of money on it?

If you don’t believe the ratios above, just think about it from Satoshi’s standpoint of “signal”. In the early days of BitCoin there were no transactions and no bitcoin price either. Thus no transaction processing expense was needed beyond a standard PC CPU to handle a few transactions per hour. Satoshi set up the system to prevent evil parties from taking over, so he created the hash puzzles to GUARANTEE each Miner had to outlay SOME capital. SLictionary does the same thing now. Almost all of SLictionary’s revenues come from knowledge seekers looking up a word for a 1-penny fee. So why does SLictionary charge 1 penny to WRITE a definition; aren’t Word$miths doing the platform a favor? Yes and no. The “favor” is returned by SLictionary paying off Word$miths for all future transactions on that definition, but also, charging a penny to define a word prevents spamming the database with nonsense and crap.
Word$miths are outlaying capital; it’s small, but it’s a signal they won’t be wasteful. Twetch is the same deal: they charge 2¢ per texty Twetch to make sure no one can spam their feed with nonsense without losing capital. It’s a costly signal, but in Twetch’s and SLictionary’s case a profitable one. In BitCoin, Craig Wright decided to make the signal meaningless, because if he used something useful then gerrymandering and pandering to the benefactors could gum up the fairness of the contest. So he made it as closely reliant on just money as he could, and money BURNT or wasted. But was Satoshi interested in torching $100 bills? We’re guessing Satoshi Nakamoto hates burning money as much as we do. So maybe his system AT SCALE tends to SAVE energy rather than waste it?

So if “signal” is important, what’s a bigger signal than willingness to invest $100 billion in processing equipment to compete with the 8 continental Nodes of 2040? Are the ASIC-hashers really important at that point? Sure, they signify a game which must be played, but that game will only be played for common transactions which are time-sensitive. Most transactions in 2040 won’t be so time-sensitive. If there ended up being 8 giant Nodes in the world in 2040, that means each Node is winning a block about once every hour-twenty, if we assume homogeneous size amongst them. Furthermore, if many transactions were buying coffee or goods and needed confirmation by the merchants, would a single confirmation from an honest node, with $100 billion on the line if they cheat, be enough? I certainly wouldn’t want to ruin my reputation as a Node, so I’d probably have some deals in place to have access to competing Nodes’ transactions without actually having revenue upside from them. By the same logic, the peasant USER also wouldn’t want to cheat, or else be thrown off the BitCoin network forever and become an Aldous Huxley-style outcast from society!
Investing the billions in 2040 to even be a competitive Node IS the signal; it’s the risk outlay which ensures honesty! It’s going to be MUCH bigger than outlays for ASICs, which don’t really have any great revenue-generating power on their own. Stressing a point: the higher ASICs costs are in 2040 relative to transaction processing costs, the lower profit margins will be. Since transaction processing will most likely be a commodity game with very low margins, we must conclude that all large Nodes will do anything in their power to reduce energy consumption on ASIC hashing, and CERTAINLY they will all compete on low energy costs for transaction processing. Thus, wasteful ASICs hashing will be a VERY small percentage of overall Node expenditures; AND, Nodes will compete in a championship every 10 minutes with power consumption as one of the biggest drags. This is a MUCH harsher competition than AWS, iCloud and Google Drive fight!

FUTURE CONSIDERATIONS for Nodes

It’s always my intent to look forward, and not just dwell on how BitCoin works today. In fact, I’d say if you’re not always thinking of BitCoin at scale, rather than how everything looks today, you’re likely to miss things like how the BitCoin Network will be the ultimate data and monetary network with respect to power conservation. It’s here I’d like to go more out on a limb and venture some guesses about what topography & localism, and private transactions, will mean to Nodes in the future. This is the part of the article where you should be more critical, and challenge my predictions.

Private Transactions: We already saw these in late 2019, with Mempool making a private “SLA” deal with WeatherSV. BSV went from a default of 1 sat/Byte to 0.5 sats/Byte in early 2020 BECAUSE of this deal’s implications. Private transactions require time insensitivity; this was described perfectly to me in Korea, by the way, by Craig himself, in a single sentence.
He described perfectly how the fee marketplace would be dynamic, a month or two BEFORE it actually became dynamic thanks to Mempool + WeatherSV! As if we needed any MORE reasons (see the 3 Wrights link) to verify CSW = SN. But if you think of private transactions which can confirm in 2 hours or more, there’s a LOT in that box, and those transactions will want to be as cheap as possible. So I’m expecting a LOT of private transaction agreements, and I think we’re already getting a feel for those courtesy of TAAL’s dealings with EHR Data and UNISOT, among others. Why won’t private transactions be far bigger than general broadcast-able time-sensitive ones? How many patent filings come down to hours?

Localism considerations: We’ll talk more about speed and localism in a coming NEED FOR SPEED article. The answer in the back of the book, I’d guess, is that Nodes will specialize by geography. Apps in North America will trust a North American Node slightly more than one in China, and vice versa. This won’t likely be as drastic as the geographical splits for wallet companies, which to me seem very much like banks (also highly geographic), but there should be some trust element to apps’ choices of Nodes anyway. Topographically, though, geography should play a much higher role in speed of transactions. If you think about MMORPG games combined with eSports prizes & wagering, for instance, latency will become a HUGE issue. But that topic will be shelved until next time. Here are some fun snippets from lately…

TAAL intends to use the acquired blockchain computing devices to support its ongoing operations and power BitcoinSV (BSV) blockchain transaction processing solutions for enterprise clients. TAAL expects to begin operating the devices by Q1 2021, establishing a significant operational footprint for TAAL in North America.
The blockchain computing devices will enable TAAL to offer specialized services to enterprises that require large volumes of transactions processed in North America on an ongoing basis on the reliable BitcoinSV blockchain. TAAL believes that BitcoinSV provides the best blockchain network to support such specialized services to enterprises, supporting large transaction volumes due to BSV’s scaling and microtransaction capabilities.

Notice TAAL’s mention of “North America” twice; it alludes to the importance of geography in BitCoin Node transaction processing. In other words, proximity matters: TAAL is expecting to dominate North American transactions by offering private “SLA” [ref #] terms to its customers. We don’t know MUCH about what the future holds as far as the BitCoin Network and its member Nodes are concerned, but we DO know that localism will have its place. This only makes sense, as localism is important to network structure and always has been (are you old enough to remember that AT&T used to CHARGE a higher rate for a long-distance call vs a short-distance call?). There was once a time in the early days of the internet when ISPs (internet service providers) charged per Byte (usage), before one-size-fits-all took over. We will return to per-Byte charging again, and when we do, localism will matter.

Major worldwide Nodes guess: Topographically, I’d go with 8 in the very long term (beyond even 2040). BitCoin really only needs 3 to work, 1 of them as a tie-breaker, plus more for higher robustness. 8 is derived from splitting the globe into sections using each coordinate plane (xy, xz, yz): 4 sections in the Northern Hemisphere, and 4 in the South. It won’t work out that way for a long time, but eventually people will fill up both the continents AND the seas (Thiel’s floating cities are a much easier concept than living on Mars, Elon!) and everyone will need BSV Nodes nearby!

Technical dive into Proof-of-Work: In the beginning, bitcoind v0.1.0 did it fairly simply.
Once a second or so, it would take an in-memory map of all the transactions it had received, check any new ones to ensure they met the minimum fee requirement and, if so, add them to the block template, which was basically an ordered list of valid transactions. It would then calculate a merkle tree from that set of transactions and build a block header to start doing proof of work upon. — Steve “Shadders”, nChain, on the BSV “Dynastic” release (1.0.7)

I’ve deliberately, in this article so far, sold short the proof-of-work pertaining to hash-puzzles. It was deliberate for emphasis, to command your attention to TRANSACTION VERIFICATION as the true moneymaking mechanism for BitCoin Nodes. Here is where I clarify. In reality, the “leading zeroes” hash puzzle is in fact a “Proof of Transaction Processing” directly. How? It has to do with the INPUT to hashing, which includes the block of transactions itself. If you think about this logically, a transaction-processing Node in 2040 won’t be able to make any revenues without including MANY transactions in a block, thus the winning hash-puzzle solution will NEED to include all the transactions (a TeraByte, for instance). So what the hash-solution POW represents is the fact that the Node has done all the work of processing the transactions in the network! THE LEADING-ZERO HASH PUZZLE SOLUTION IS THE PROOF, NOT THE WORK!

A hash-puzzle solution is the PROOF of BitCoin transaction processing, not the WORK. Traditional “proof-of-work” in BitCoin, the hashing-puzzle whose solution is a hash with the proper amount of leading zeroes (based on the difficulty factor), is actually just the cherry on top of a sundae: it’s a proof, not the work. The cherry acts as PROOF the ice cream parlor proprietor has built an ice cream sundae, because she doesn’t get to drop it on top until AFTER the sundae is built. The WORK is building the sundae itself. The cherry is supposed to be RELATIVELY easy if all the previous work has been done.
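The bitcoind flow Shadders describes (collect transactions, compute a merkle tree, grind a header) can be sketched as a toy. This is a deliberate simplification, not the real protocol: actual BitCoin uses an 80-byte header with more fields and a compact target encoding. The point it shows is that the puzzle solution commits to every transaction via the merkle root:

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Double SHA-256, as BitCoin uses."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Fold transaction ids pairwise up to a single root hash."""
    level = txids
    while len(level) > 1:
        if len(level) % 2:                 # duplicate last id on odd counts
            level = level + [level[-1]]
        level = [sha256d(a + b) for a, b in zip(level[::2], level[1::2])]
    return level[0]

def mine(prev_hash: bytes, txs: list, difficulty_bits: int) -> int:
    """Grind nonces until the header hash has enough leading zero bits."""
    root = merkle_root([sha256d(tx) for tx in txs])   # commits to ALL txs
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        header = prev_hash + root + nonce.to_bytes(8, "little")
        if int.from_bytes(sha256d(header), "big") < target:
            return nonce                   # the "cherry": proof, not work
        nonce += 1

txs = [b"alice->bob 5", b"carol->dave 2", b"erin->frank 1"]
nonce = mine(b"\x00" * 32, txs, difficulty_bits=16)
print("winning nonce:", nonce)
```

Changing any single transaction changes the merkle root and invalidates the nonce, which is exactly why a valid leading-zeroes hash serves as proof that the whole block of transactions was assembled first.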
If a competing Node doesn’t include all the fee-generating transactions in the ingredient-precondition for this “cherry”, it simply won’t make any money!

Cherry = the leading-zeroes hash-puzzle solution, which cannot generate revenue unless it includes the verified transaction list inside!
Sundae = a block of verified-good BitCoin transactions, someday a TeraByte or more in size.
“Cloudia” the ice cream parlor proprietor = a BitCoin transaction processing Node, like TAAL or Mempool or Unbounded Enterprise or MatterCloud.

This understanding of the ORDER of operations for “petahashes” of energy spent on a seemingly mindless task allows us to see another small energy-saving contribution only BSV will have, due to its ability to scale. The nonce, which is part of the BitCoin hash-puzzle ingredients, is just a goofy random input which allows a Node to make another guess and hope the resulting hash meets the leading zeroes determined by the difficulty factor. Here’s the thing tho: it only needs to be changed to something different if the block of transactions hasn’t changed since the last guess! Changing it requires a little bit of work by the Node, so it uses electricity, especially if you consider how many puzzle guesses a Node makes per second. If a new transaction comes into the Node for verifying, that alone will change the output hash, so no new nonce will need to be calculated. In other words, every new transaction allows a Node to receive what the nonce provides FOR FREE! Let’s compare btc to BSV one more time using this knowledge:

PetaHash defined courtesy of: https://sym.re/N8mgTum

bitCORN/”btc” is limited by its dictatorial developers to a 1 MB “small block” size, and since standard monetary transactions are, say, 200–500 Bytes, this translates directly to about 5–8 transactions per second.
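These throughput figures, and the nonce-skip counts in the comparison that follows, can be checked in a few lines (250 Bytes is an assumed average transaction size within the 200–500 Byte range):

```python
# 1 MB "small blocks" vs 1 TB blocks: transactions per second, and how
# often a fresh transaction refreshes the hash input "for free".
AVG_TX_BYTES = 250          # assumed average within the 200-500 Byte range
BLOCK_SECONDS = 600         # one 10-minute block

btc_tps = 1_000_000 / AVG_TX_BYTES / BLOCK_SECONDS   # ~6.7 txns/sec
btc_skips = 8 * BLOCK_SECONDS                        # at ~8 txns/sec: 4,800
bsv_skips = 10**12 // AVG_TX_BYTES                   # 1 TB block: 4 billion

print(f"btc throughput: ~{btc_tps:.1f} txns/sec")
print(f"btc free hash refreshes per block: {btc_skips:,}")
print(f"BSV free hash refreshes per block: {bsv_skips:,}")
```

Each incoming transaction changes the hash input on its own, so the bigger the block, the more often the nonce increment comes along for free.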
For every 100 quadrillion nonces potentially needed, btc could skip generating a new nonce about (8 txns/sec * 60 sec/min * 10 min/block) 4,800 times, whereas… BSV can skip the change-of-nonce 4 billion times when processing 1 TeraByte blocks! The more transactions per second BitCoin SV can scale, the more incremental energy it will save! btc’s “POW” is like amassing the tallest structure in the world using nothing but spire: a worthless tower. BSV is like building the tallest building in the world where every bit of verticality contributes USEFUL office or living space for people. Is it any surprise that even the large-block transactions of BSV contribute to its success, while btc’s proof of work is actually a huge negative drain on human ingenuity?

CONCLUSION:

This quote just seems silly now, doesn’t it?

“Bitcoin mining’s energy use is reportedly growing at a rate of 25% per month. At that rate of growth, it will consume as much electricity as the US in 2019, and by 2020 bitcoin mining could be consuming the same amount of electricity every year as is currently used by the entire world.”

The BitCoin Network of Nodes isn’t going to be any “Electric Avenue”, sorry Eddie. BitCoin’s protocol was set from the beginning to make its computation network the most energy efficient in the world at scale. The protocol uses premeditated code thoughtfulness, but most of all profit margin incentive, to guarantee its light energy footprint as it grows bigger and more useful to humanity. Craig Wright’s vision on power consumption was likely considered decades ago, and we will all benefit in the decades to come.
$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$$

If you found this article helpful on your digital-asset outlook, BitCoin understanding, or surfing thoughts, please, “in lieu of flowers”, go look up a word that’s NOT defined in SLictionary (“Lucky you!”) and apply a nice extra Word Bounty amount (see the “blank check” money button on the Lucky You page) to the word. www.SLictionary.com

OR define a word, to best LEARN a word, here (and make money in the future!): https://www.slictionary.com/createword/@/@/@

BAEmail me at [email protected]

&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&

REFERENCES:
[0] BitCorn: courtesy of BitCoin SV app www.SLictionary.com
[1] 3 Wrights Don’t Make a Wrong: https://equitydiamonds.medium.com/three-wrights-dont-make-a-wrong-a37ea19ed636
[2] Work: the definition of “Work”, courtesy of www.SLictionary.com
[3] HODL
[4] “The Power of Love” (official video) by Huey Lewis & the News
https://medium.com/@equitydiamonds/bitcoin-will-massively-reduce-the-computational-grids-electric-power-consumption-b88b0f52d33a
['John Pitts']
2021-03-23 18:07:02.611000+00:00
['Bsv', 'Energy', 'Ethereum', 'Information Technology', 'Bitcoin']
1,092
Third Law of the Interface: Interfaces form an ecosystem
Interfaces live in an ecosystem, and there is a fertile and conflicting exchange among them. When the engineers who created the first computers needed a device to program them, they simply adapted what they already had: the typewriter’s QWERTY keyboard. And when in the 1960s the computer needed a real-time output device, they had no doubts: the television screen was waiting for them. Like the synapses of a neuron or the valences of a chemical element, interfaces have the possibility of linking with other interfaces. Interfaces, as Claude Lévi-Strauss (1964) said about myths, engage in a dialogue and “think each other”. The dialogue between interfaces does not discriminate against any type of device or human activity. What today is on the screen was yesterday in the real world, and what will appear tomorrow in a videogame will later be found on the Web. Interfaces form a network that looks like an expansive hypertext in perpetual transformation, one that carries out operations of movement, translation, transduction and metamorphosis. The evolution of interfaces depends on the correlations that they establish with other interfaces. If an interface does not engage in a dialogue with other interfaces, it does not evolve and runs the risk of becoming extinct (Fourth Law).

The impossible interface

Sometimes the interface does not find good interlocutors for dialogue. The printing press, invented in China a millennium before Johannes Gutenberg, could not become consolidated in that society because it was almost impossible for it to dialogue with a system of ideographic writing in which each sign corresponds to a concept. As Marshall McLuhan (1962) explained, the interface of the Chinese press lacked an interlocutor: the Latin alphabet. The Gutenberg machine, on the other hand, integrated into one interface the wine press, the Latin alphabet, paper, binding systems and the techniques of fusing and molding lead.
Five hundred years after Gutenberg, something similar happened with graphic interfaces. Several companies attempted to market a personal computer with a user-friendly interface (the Apple Lisa in 1983, the Xerox Star in 1981), but they failed. Finally, in the prophetic year of 1984, the miracle occurred: the Macintosh, the machine for the rest of us, conquered the public. Why did the Mac succeed where the Apple Lisa and the Xerox Star had failed? Because it established a dialogue between its graphic operating system, Hewlett-Packard’s laser printer and Adobe’s PostScript language. The union of these three technologies revolutionized the way the world understood computing, created new professional fields such as Desktop Publishing (DTP) and generated the conditions for the personal computer revolution in the 1980s (Lévy, 1992).

Perfect interfaces

The same thing happened again at the beginning of the 21st century. As Steven Levy explains in The Perfect Thing (2006), the appearance of the iTunes software, the progressive reduction in the size of hard disks, the lower price of memory and the development of the FireWire interface converged into the coolest product of the new decade: the iPod. The iPod is an interface that integrates different hardware and software elements – a 1.8-inch hard drive, the FireWire connection, the MP3 format for audio compression – with the former Macintosh application for playing and managing music: iTunes. As in 1984 with the Macintosh, the interconnection of actors determined the success of the perfect thing. Just one year after Levy’s book was published it was already old. On 29th June 2007 Apple introduced a new perfect thing with an even more extended network of actors: the iPhone. This description of high-technology devices that converge into a single interface should not eclipse the human actors that participate in them.
Designers (the Apple design team, not just Steve Jobs), institutions of any kind (media, markets, Apple Stores, research labs, etc.) and, obviously, consumers participate and interact in the network built around these almost perfect – new improved models are presented every semester – interfaces.

Theoretical networks

The social sciences have had an intermittent interest in technological change. Classical thinkers like Adam Smith, David Ricardo or Karl Marx saw mechanization or the division of labor as fundamental topics of their economic theories. Nevertheless, from the end of the 19th century to the 1950s economics was more interested in the equilibrium of variables, so attention was focused on other fields. The development of a new school of thinking around the Austrian economist Joseph Schumpeter brought the problems of technology, innovation and entrepreneurship into focus again. For many years, researchers believed that the role of inventors was central in the innovation process: that’s why we still talk about James Watt’s steam engine, Thomas Edison’s light bulb, Alexander Bell’s telephone and Steve Jobs’ Macintosh. To every name there corresponds an artifact, or more than one (Thomas Edison also ‘invented’ the phonograph, and Steve Jobs the iPod, the iPhone and the iPad). This conception is based on the heroic role played by each individual inventor in the creation of a new artifact. Researchers like Nathan Rosenberg (1992), one of the most recognized historians of the economy, denounced this ‘heroic theory of invention’ that permeates our language, patent system and history books. In this context, the Laws of the Interface prefer to establish a dialogue with conceptions and theories like the Social Construction of Technology (SCOT) (e.g.
Bijker, Hughes and Pinch, 1987; Bijker and Law, 1992; Bijker, 1997), Actor-Network Theory (ANT) (Callon, 1987; Law and Hassard, 1999; Latour, 2005), media ecology (McLuhan, 1962, 2003; McLuhan & McLuhan, 1992; Scolari, 2012, 2015; Strate, 2017), media archaeology (Huhtamo & Parikka, 2011; Parikka, 2012) and media evolution (Scolari, 2015, 2019). The contributions of Arthur (2009), Basalla (1988), Levinson (1997), Logan (2007), Frenken (2006), Manovich (2013) and Ziman (2000) have also been integrated into this interdisciplinary and polyphonic conversation. The Laws of the Interface, in a few words, propose an eco-evolutionary approach to socio-technological change based on the contributions of all of these authors and disciplines.

The content of one interface is always another interface

What happens when we deconstruct an interface? The windmill was one of the most important inventions of the Middle Ages. If we deconstruct a windmill, what do we find? A combination of the water mill and ship sails, two technologies invented in Antiquity. If we dismantle the water mill we will find a wheel, an axle and many other technological actors that interact with them. When we open an interface we always find more interfaces. This fractal dimension of interfaces could take the form of a new law, or at least a corollary: the content of one interface is always another interface.
https://uxdesign.cc/third-law-of-the-interface-interfaces-form-an-ecosystem-e6293a108089
['Carlos A. Scolari']
2019-10-09 23:49:23.079000+00:00
['UI', 'Usability', 'Design', 'Interfaces', 'Technology']
1,093
A New Online Platform, HomeWerk, That Strengthens Bonds Between Colleagues Has Launched.
HomeWerk, a new online platform that can help strengthen the bond between colleagues working from home, has been launched. The platform brings them together through collaborative activities, discussions, and competitions. It features integrated video and text chats as well as automated calendar invites, email reminders, and easy problem solving via live chat. Users can earn points, badges, and streaks for different actions taken within the platform, and there are weekly, monthly, and all-time leaderboards to see who is coming out on top. Teams are given a varied suite of digital activities, covering a wide range of themes and interests to ensure no one is left out, and HomeWerk is also partnering with third parties to provide premium experiences, which will also lead to more points for the leaderboard. Each user has their own section where they can track their own progress, as well as that of their colleagues, and they will also undertake a ‘Who Said That?’ quiz, with their answers displayed to give a fresh talking point for anyone who wants to connect. As an added bonus, HomeWerk will also record birthdays, work anniversaries, and other milestones, so members can leave messages for their colleagues, and the platform will collate the content into a premium, shareable moment. HomeWerk is already being used by the likes of Facebook, O2, Sony Music, and John Lewis. Individuals can also use its free four-week trial version.
https://medium.com/@digitaltimes-2020/a-new-online-platform-homewerk-that-strengthens-bonds-between-colleagues-has-launched-8d4ede5cab8c
['Digital Times Africa']
2021-02-23 11:37:11.511000+00:00
['Apps', 'Technology', 'Technology News', 'Facebook']
1,094
The Unsung Heroes of Modern Software Development
Open Source Foundation Leaders

I’ll highlight seven open source foundations that are key to many important projects. For each foundation I’ll give a brief bio, provide the number of projects being supported as of early 2019, and highlight some well-known projects. Note that these groups fall under various IRS classifications for charitable and trade organizations; not all are technically foundations.

Apache Software Foundation

The Apache Software Foundation is 20 years old and is one of the largest foundations. As of early 2019 it has over 350 open source initiatives.

The ASF provides an established framework for intellectual property and financial contributions that simultaneously limits potential legal exposure for our project committers. Through the ASF’s meritocratic process known as “The Apache Way,” more than 730 individual Members and 7,000 Committers successfully collaborate to develop freely available enterprise-grade software, benefiting millions of users worldwide: thousands of software solutions are distributed under the Apache License; and the community actively participates in ASF mailing lists, mentoring initiatives, and ApacheCon, the Foundation’s official user conference, trainings, and expo.
www.apache.org

Many Apache projects are Java-heavy. Popular projects include: Apache HTTP Server, Hadoop, Tomcat, and Arrow.

Linux Foundation

The Linux Foundation is the home of the Linux operating system and many related projects. Some of its other 100+ projects include NodeJS and RethinkDB.

The Linux Foundation supports the creation of sustainable open source ecosystems by providing financial and intellectual resources, infrastructure, services, events, and training. Working together, The Linux Foundation and its projects form the most ambitious and successful investment in the creation of shared technology.
www.linuxfoundation.org

The Linux Foundation was founded in 2000 as a merger of two other groups.
It currently has over 1,000 members, including all the usual big-name technology companies. All hosted projects get a governance structure and back-end resources. Some projects also get funding. The Linux Foundation also provides training and conferences.

Free Software Foundation

Launched in 1985, the Free Software Foundation maintains the projects that make up the GNU/Linux ecosystem. Other popular projects include Bash, Emacs, Gawk, Make, and R.

The Free Software Foundation (FSF) is a nonprofit with a worldwide mission to promote computer user freedom. We defend the rights of all software users.
https://www.fsf.org/

The Free Software Foundation has over 5,000 members and about 400 OSS projects.

Software Freedom Conservancy

The Software Freedom Conservancy was founded in 2006. It has over 45 projects, including popular ones such as BusyBox, Git, Homebrew, Inkscape, phpMyAdmin, PyPy, and Selenium.

Software Freedom Conservancy is a not-for-profit charity that helps promote, improve, develop, and defend Free, Libre, and Open Source Software (FLOSS) projects. Conservancy provides a non-profit home and infrastructure for FLOSS projects. This allows FLOSS developers to focus on what they do best — writing and improving FLOSS for the general public — while Conservancy takes care of the projects’ needs that do not relate directly to software development and documentation.
https://sfconservancy.org/

The Software Freedom Conservancy has over 500 sponsors, including Google and some other big names.

Software in the Public Interest

Software in the Public Interest was founded in 1997. Its 39 projects include Haskell, PostgreSQL, Jenkins, Arch Linux, and Debian.

Software in the Public Interest is a non-profit organization which was founded to help organizations develop and distribute open hardware and software.
Our mission is to help genuine, substantial, and significant free and open source software projects by handling their non-technical administrative tasks so that they aren’t required to operate their own legal entity. https://www.spi-inc.org/ Cloud Native Computing Foundation (CNCF) The Cloud Native Computing Foundation is the new kid on the block. Founded in 2015, it supports open source projects around Kubernetes and containerized cloud microservices. CNCF is an open source software foundation dedicated to making cloud native computing universal and sustainable. Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization. Cloud native technologies enable software developers to build great products faster. www.cncf.io CNCF members include a who’s who of tech: AWS, Alibaba Cloud, Dell, Intel, Oracle, Microsoft Azure, IBM Cloud, and Google Cloud. As of early 2019, 4 projects have graduated and 16 are incubating. Popular associated projects include Kubernetes, which has graduated from CNCF. Interestingly, CNCF is supported by the Linux Foundation. NumFOCUS NumFOCUS is the home of many popular data science open source projects. It was founded in 2012 and its 25 popular projects include NumPy, Matplotlib, Pandas, Jupyter, Julia, and Bokeh. NumFOCUS also promotes many other open source projects as affiliated projects. NumFOCUS offers many programs in support of our mission to promote sustainable high-level programming languages, open code development, and reproducible scientific research. https://numfocus.org/ NumFOCUS holds PyData conferences throughout the world. Disclosure: I’ve volunteered at PyData DC and had a great time. I highly recommend volunteering! 😃
https://towardsdatascience.com/the-unsung-heroes-of-modern-software-development-561fc4cb6850
['Jeff Hale']
2019-12-13 16:42:53.051000+00:00
['Technology', 'Software Development', 'Open Source', 'Data Science', 'Programm']
1,095
Website: https://scantek.com/
Website: https://scantek.com/ Phone: 1300 552 106 Business hours: 24 Hrs Email: [email protected] Business Services: Information Technology, Business Solutions, Identity Verification, ID Scanners Business Description: Scantek provides software solutions to verify identities in real-time, anywhere. Founded in 2012, headquartered in Perth, Western Australia, Scantek is leading the way in digital verification of identity. Scantek specialises in instant identity verification built on facial recognition technologies and identity documentation and card verification. Scantek solutions are delivered on a SaaS model and are designed to combat fraud, increase security and improve user and customer experience. So if you are looking for a technology partner that is committed to improving your operational efficiencies, adding a layer of security to your customers’ information and improving their overall experience with your business, then Scantek is the perfect solution for you; give us a call today!
https://medium.com/@scantekaustralia/website-https-scantek-com-e8358b36167e
[]
2020-12-15 12:57:54.543000+00:00
['Id Scanner', 'Information Technology', 'Identity Verification', 'Business Solutions']
1,096
Starting Digital Transformation on the Right Path: 5 Best Practices
Digital transformation is an imperative because IT is no longer merely the keeper of infrastructure; it’s a core enabler of new strategies and a core catalyst for growth. Legacy architectures and traditional systems integration techniques can’t meet the pace of modern business, so true transformation doesn’t come from mobile apps, e-commerce or websites — it comes from adopting architectures and operational models that support agile development, enabling companies to combine and recombine software to create new customer experiences and business opportunities, and to constantly iterate to satisfy changing customer preferences. Consider Ticketmaster’s* transformation, which my colleague Brian Kirschner recently described in an article for CIO.com: Ticketmaster has been online for a long time — but until recently, its interactions with customers were funneled through relatively limited channels. Going online gave Ticketmaster scale beyond physical ticket booths, but its business still essentially operated according to old models of supply and demand. The company was bearing the cost of building channels — whether those were booths, websites or apps — and the cost of marketing and promotion to drive customers to them. That’s no longer the case. Ticketmaster established an API platform to make its core business services, such as ticket purchasing and event discovery, more easily available for partners — which now include Facebook, Broadway.com, Costco and Fox Sports. By converting its business into pieces of software that developers — including those beyond the walls of the firm — can easily build into apps and services, Ticketmaster benefits from demand generated by third parties and transactions fulfilled in channels it didn’t have to build. 
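The platform pattern in the Ticketmaster story (core business capabilities packaged so that any partner channel can consume them) can be sketched in miniature. Everything below is a hypothetical illustration, not Ticketmaster's actual API: the event data, the `discover_events` and `purchase_ticket` capability names, and the partner widget are all invented for this sketch.

```python
# Sketch of "business capabilities as API products". All capability and
# data names here are hypothetical, not Ticketmaster's real API.

EVENTS = [
    {"id": 1, "name": "Hamilton", "city": "New York"},
    {"id": 2, "name": "Cats", "city": "Chicago"},
]

def discover_events(city):
    """Core capability: event discovery, reusable by any channel."""
    return [e for e in EVENTS if e["city"] == city]

def purchase_ticket(event_id, buyer):
    """Core capability: ticket purchasing."""
    event = next((e for e in EVENTS if e["id"] == event_id), None)
    if event is None:
        return {"ok": False, "error": "unknown event"}
    return {"ok": True, "event": event["name"], "buyer": buyer}

def partner_widget(city, buyer):
    """A third-party channel (say, a social app) consuming the same
    capabilities the first-party site does."""
    events = discover_events(city)
    if not events:
        return None
    return purchase_ticket(events[0]["id"], buyer)
```

The point of the sketch is the last function: the partner consumes the same capabilities the first-party app would, so transactions get fulfilled in channels the platform owner never had to build.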
Apigee works with hundreds of enterprises on these sorts of transformations, and we’ve observed that though no two companies follow the same journey, businesses fall into patterns that make it possible to prioritize action steps and leverage best practices. In this article, we’ll focus on best practices for companies at the beginning of their journeys, as they move from digital projects to full platform and continuous delivery capabilities. 1. Build a case for the business value of APIs The digital economy has moved beyond smartphone apps and e-commerce. Customers expect seamless experiences — i.e., that interactions begun in one place, whether an app or a website, will be reflected in other places, such as physical stores or other apps. Sophisticated businesses no longer focus exclusively on producing a finite supply of products and selling them through a finite range of channels — they also use technology and platform strategies to mediate the exchange of value wherever it can be consumed. AccuWeather* doesn’t just provide weather data on a first-party website, for example — it makes its weather data available via APIs so developers can build it into their own apps. Likewise, Walgreens* doesn’t just offer services such as photo printing and prescription fulfillment through stores and other first-party channels — it makes these services available as APIs. There are literally thousands of examples like this, in which companies use infinitely scalable digital assets for strategic leverage. Action steps: Align senior business and technology leaders around an API-first platform vision: Modern businesses are agile, using APIs to combine and recombine their software in order to bring new capabilities to market, expand ecosystem participation, capitalize on short-lived opportunities, and quickly adapt to changing customer needs and market conditions. 
Emphasize that APIs are not technical minutiae or middleware; they are products that empower developers to leverage core systems and functions to build apps and digital experiences. 2. Fund API projects as a step toward platform strategies An effective API platform typically requires a funding model that gives teams the flexibility to iterate rapidly without running into bureaucratic blockers and stifling governance. This sort of funding may require top-down support, and building the requisite executive consensus can be a challenge. Put another way, if your funding model, development cadence, and governance processes are designed for a waterfall world, your API program will likely struggle to gain momentum. Project-to-project funding is generally not tenable in the long run, but as a starting place, single projects can be a good way to generate success and build the credibility needed to align executives around the API platform’s growth. These early projects should focus on building APIs as products — i.e., designed for developer consumption, not just to expose systems. Even if the initial scope of these APIs is modest, they can become references to drive wider platform adoption. Action Steps: Start now by explicitly funding the API components of a significant in-flight or imminent project. Good candidates include partner integrations or web, mobile or IoT functionality projects. Such projects initially involve exposing systems — but to demonstrate a path to broader digital business, your teams should think bigger. If the team applies user-focused, outside-in strategies and designs and manages its APIs as products, the APIs should become a foundation for shifting the rest of the business to platform strategies. 3. Unite business and technical talent Top businesses generally operate from the outside-in, using a customer focus — rather than IT roadmaps — to define strategies.
To achieve this dynamic, business and technology workers should develop digital strategies collaboratively. IT isn’t just responsible for maintaining infrastructure, in other words — it’s the core enabler of new business models. APIs shouldn’t be built in silos, with business teams dictating requirements and simply handing them off to IT. Action Steps: Have technical and business talent jointly define desired customer experiences, then move to required product features. 4. Challenge Existing Business Models Digital ecosystem participation is an increasingly popular digital transformation accelerant. Enterprises can participate in ecosystems by packaging business systems and services into API products that provide value to partners and external developers. To take advantage of these opportunities, executives must remain open to new business models that may emerge as ecosystem participants in other industries begin to leverage the company’s APIs. Successful digital businesses can benefit from network effects as users and partners gained in one part of an ecosystem translate to new users and partners elsewhere. Apigee customers have pursued API-first ecosystem models to enter adjacent markets, create new customer interaction models, and rapidly grow their brand reach and partner ecosystems. Action Steps: Design APIs that are easy for partners to consume. Manage the APIs as products that developers can leverage at scale to extend your brand. Set clear permissions for faster, more secure onboarding, and encourage adoption with self-service features, including documentation, sample code, and testing tools. 5. Measure How APIs Are Consumed API consumption metrics help enterprises align their workforces around digital best practices, understand changing user behavior, and drive business results. Traditional enterprise ROI metrics assume certain conditions — e.g., long payback periods and predictable patterns around transaction volume and pricing strength.
Modern digital business operates under different conditions, such as shorter opportunity windows and more fragmented customer segments, that require different metrics. API consumption metrics, such as which APIs produce the highest-value transactions per call or which APIs generate the highest partner engagement, can be strong signals of emerging business opportunities, for example. Arbitrary metrics, such as the number of APIs produced, don’t provide this kind of insight. Action Steps: Use API consumption metrics to understand how digital initiatives lead to business impacts. Ticketmaster, AccuWeather, and Walgreens are Apigee customers. [Looking for more insights about digital transformation? Check out Apigee’s resource hub here.]
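The consumption metrics described in best practice 5 can be computed from a call log in a few lines. The log below is toy data and the field names are illustrative, not any real analytics schema:

```python
# Sketch of API consumption metrics: value per call and partner
# engagement, computed from a (hypothetical) API call log.
from collections import defaultdict

call_log = [
    {"api": "checkout", "partner": "acme",    "value": 120.0},
    {"api": "checkout", "partner": "globex",  "value": 80.0},
    {"api": "catalog",  "partner": "acme",    "value": 0.0},
    {"api": "catalog",  "partner": "initech", "value": 0.0},
    {"api": "catalog",  "partner": "globex",  "value": 5.0},
]

def value_per_call(log):
    """Average transaction value per call, by API."""
    totals, counts = defaultdict(float), defaultdict(int)
    for call in log:
        totals[call["api"]] += call["value"]
        counts[call["api"]] += 1
    return {api: totals[api] / counts[api] for api in totals}

def partner_engagement(log):
    """Distinct partners calling each API -- a simple engagement signal."""
    partners = defaultdict(set)
    for call in log:
        partners[call["api"]].add(call["partner"])
    return {api: len(p) for api, p in partners.items()}
```

Note how the two metrics tell different stories: `checkout` carries the transaction value, while `catalog` has the broadest partner reach, and neither insight comes from merely counting APIs produced.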
https://medium.com/apis-and-digital-transformation/starting-digital-transformation-on-the-right-path-5-best-practices-322f95a517e3
['Michael Endler']
2018-10-16 22:54:45.373000+00:00
['Api Management', 'API', 'Digital Transformation', 'Enterprise Technology', 'Software Development']
1,097
Coworking Case Studies: A Breeding Ground for Tech Giants
Coworking spaces offer smaller businesses flexibility, the opportunity to collaborate, cheaper overheads and access to top locations. They are a vital option in today’s economy for tech startups developing their products. But are companies actually using them to become successful household names? The short answer is a resounding yes. Given the turbulent nature of life as a tech startup, many of the most successful products launched in recent years were born in coworking spaces such as Primalbase, WeWork, Regus or Spaces. From ridesharing to blockchain to social media, coworking spaces have incubated some of today’s biggest names — proving they are more than a last resort for cash-strapped new companies. We took a look at the most notable examples. Uber It’s almost impossible to talk about the success of co-working spaces without mentioning one of the most infamous companies in the world: Uber. The ‘ride sharing’ giant has made headlines throughout its eventful nine-year history, both good and bad, and has been one of the key players in ushering in the controversial gig economy. Although it’s almost impossible to believe now given its size, Uber began life in a coworking space. When the team consisted of just eight people, Uber operated out of a coworking space in San Francisco, California. The space gave them the flexibility and cheaper overheads to keep going — before the likes of Google Ventures and Fidelity Ventures invested heavily, enabling the company to expand into over 250 cities. Hootsuite Software as a Service (SaaS) is a crowded space, with only the best ideas making it from development to market, let alone mass adoption. This takes time, and piling valuable resources into a permanent, private office space is too risky for companies starting out in a competitive space. Hootsuite is one SaaS provider that became a resounding success.
Their software allows users to manage different social media accounts in one place, a particularly useful tool for marketers. Now valued at over $1 billion, Hootsuite started life in a coworking space in Vancouver, Canada, where it was able to hone its product, make connections and secure the funding necessary to take it to the next level. Waves Waves is an open-source blockchain platform known for being ‘the world’s fastest blockchain’. In 2017, Waves partnered with Deloitte to launch the development of a legal framework for wider adoption of blockchain technologies — proof that the platform is thinking ahead to real-world adoption alongside developing the core technology itself. Not only did Waves begin life in Primalbase’s coworking spaces, it still uses them today. Waves was attracted to coworking for the same reason so many others have been in the past: flexibility and affordability. These are both due to Primalbase’s unique tokenised business model and leasing system. Waves CEO Sasha Ivanov explained that the community spirit fostered through Primalbase’s unique business model is also a major attraction. “It facilitates collaboration a lot because you exchange ideas and meet people and share contacts,” he said. “Everyone is really enthusiastic and welcoming because of the startup nature of the different companies, and they’re real tech people so they love what they do.” Instagram The idea behind Instagram was simple — Facebook with only the pictures. It was an idea that led to a whopping 1 million users signing up within two months of launching, and it has gone from strength to strength ever since. Two years after it was founded, with only 13 employees at the company, Instagram was acquired by Facebook. Today, it may be housed in Facebook’s mega-complex in Silicon Valley, but it began life in a coworking space.
The coworking space reduced the expensive risk of moving into a permanent office too early in the company’s development and can be credited with inflicting a million inane pictures on the world. And some nice ones too. d+b !ntersection Design company d+b !ntersection is an interesting case because its founder, Alexandra Rodriguez, initially ran the business from her own Barcelona flat. Naturally, a lot of companies start this way, unable to afford office space until the project gets off the ground, or simply without the workforce to necessitate a shared space. For Rodriguez, though, the decision to move into a coworking space was made because she believed they are naturally inspiring spaces. She felt that it would give her a place to meet like-minded creatives and share project ideas. The influence that being around other thriving companies can have on a business shouldn’t be underestimated, with opportunities for collaboration and inspiration abundant, and d+b !ntersection has gone from strength to strength since making the decision to move into a shared space. Are you interested in finding out more about what being in a coworking space can do for you? Send us an email at [email protected] to ask any questions you might have or to set up a tour so you can see for yourself!
https://medium.com/primalbase/coworking-spaces-offer-smaller-business-flexibility-and-collaboration-at-a-price-they-can-afford-a93b3d8fc97
['Charlie Sammonds']
2018-11-15 17:00:38.650000+00:00
['Coworking', 'Uber', 'Startup', 'Technology']
1,098
Get iPhone 12 Standard and iPhone 12 Pro at Cheaper Prices on December Holidays
If you are looking for a perfect Christmas or New Year gift to present your friend or beloved one, don’t miss this amazing opportunity. Verizon has a buy-one-get-one-free promotion with its iPhone 12 deals. So, you can now buy a device and get an additional device for absolutely free when you purchase the first with a new unlimited plan. The company also offers another great option because you can also upgrade and trade in your old device for a $440 discount over 24 months. Note that you can score an additional $250 off at Verizon if you’re switching over this week. It is a holdout Cyber Week deal that can be stacked on top of the various offers below. Apple has something close to a monopoly on unlocked iPhone 12 deals right now with its incredibly generous trade-in program. Source: https://www.techhound.org/gadgets/get-iphone-12-standard-and-iphone-12-pro-at-cheaper-prices-on-december-holidays/
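As a quick sanity check of the numbers quoted above (assuming the trade-in credit and the switcher bonus stack as described):

```python
# Working out the quoted trade-in math: a $440 discount spread over
# 24 monthly bill credits, plus the $250 switcher bonus (stacking of
# the two offers is assumed, per the deal description).
trade_in_discount = 440
months = 24
switcher_bonus = 250

monthly_credit = trade_in_discount / months
total_savings = trade_in_discount + switcher_bonus

print(f"Monthly bill credit: ${monthly_credit:.2f}")  # about $18.33/month
print(f"Total savings:       ${total_savings}")       # $690
```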
https://medium.com/@rahulsharmauk/get-iphone-12-standard-and-iphone-12-pro-at-cheaper-prices-on-december-holidays-ebdb6130f962
['Rahul Sharma']
2020-12-22 10:43:46.087000+00:00
['Daily Update', 'Iphone 12 Pro', 'Technews', 'Technology', 'World News']
1,099
Customer Journey Map and its role in Marketing
Customer experience can also be described as a battleground. A good experience will make customers return to a store again and again; a bad one, however, will make them leave and never return. A customer journey map can be a strong ally in this battle. According to L. Abbott, the true desire of customers is not just to buy a product, but to buy an experience. In that light, decision making and customer experience should be based on human behaviour instead of focusing only on selling a product. The design and good management of a customer experience by the brand are two necessary factors for survival in the marketing field. Companies should design an experience for the customer in order to create an emotional connection between the customer and the brand. Customer journey mapping is an essential tool for this process, as it combines creativity, brand awareness, and a huge influence on customers. In addition, the map can create the story of a client: it can help the client meet the brand, get involved with it, and finally build a long-term relationship with it. As companies create maps that need more and more data, experiences and models, they need ever more tools to help realize their ideas. Artificial intelligence can automate hundreds of decisions along the customer's journey, which allows companies to create new content and campaigns constantly. The goal of customer journey mapping is to create a representation of the experiences that a company's clients have with its brand and products. Apart from that, an important function of mapping is to show the key points that a company needs to change in order to see the client's perspective. That way, companies will create experiences that clients actually want, instead of creating something that they think clients would like. What is unique about customer journey mapping is that it can capture each individual consumer's goals, desires, mentality and preferences.
This is achieved by collecting their touch points and main interactions. With this method, the mapping system will be able to recognize a customer's future purchases through patterns, and it will fulfill their needs with speed and precision. By using data, Machine Learning and Natural Language Processing techniques, the map will recommend products on its own, even before the user searches for them. In order to create a customer journey map, there are some factors that should be considered. Personas, for example, are archetypes that companies develop so they can learn more about the target audience. Each persona has a different personality, habits, needs and attitude, but also different pain points. Pain points have to be solved through journey maps, as these points represent people's goals, emotions, beliefs and expectations. Apart from that, attention should be paid to the timeline. As is widely known, timing is everything, and customer journey mapping is no different. In order to keep a good timeline, brands should take note of the points that determine customers' actions and their interactions with the company, its social media channels, or contact forms. In order to build the best possible customer journey map, algorithms need a huge volume of data. For that reason, maps filter information according to customers' location, behaviour similar to other people's, demographics, previous purchases, and more. Maps enhance the value of personalization and create a great experience that helps improve a brand's image with clients. The IKEA Example One of the most interesting customer journey maps that uses its touchpoints in digital form, in advertisements and in physical stores, is none other than IKEA's. If you've ever been to an IKEA you will probably have seen that everything follows a labyrinth pattern. This is not a random pattern; the store is laid out that way in order to engage customers instead of just securing their sales.
As every room and product is set up in a particular way, customers know that they are about to go on a journey: experiment, try some of the furniture, or even purchase something they didn't plan to at first. This layout promotes every single product available to the customer but doesn't pressure them into buying, since they can always just follow the arrows and head to the cashier. As clients usually use more than one device, customer journey mapping becomes a continuous process, carried on until the ideal product is created. Dynamic pricing, interactive experiences, and Intelligent Personal Assistants all help to create the best journey mapping experience. IKEA shows perfectly how to turn a digital customer journey map into a physical one, and succeed. What you need to do is essentially the same: create a story for the customer, and fulfill their needs. To make your digital experience a physical one, it is important to have data on customer behaviour from your touch points and main interactions. You need to know who your customers are, what they expect from your brand, and what will make them satisfied. Your map has to be tailored to your customer personas, be able to tell a story, and fulfill their needs.
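The "similar behaviour to other people" filtering described above is, in its simplest form, collaborative filtering. Here is a minimal sketch with toy data (the customer and product names are invented for illustration):

```python
# Minimal collaborative-filtering sketch of "recommend based on similar
# behaviour": score products by how many similar customers bought them.
purchases = {
    "ana":   {"desk", "lamp", "chair"},
    "ben":   {"desk", "lamp", "shelf"},
    "carla": {"rug", "mirror"},
}

def similarity(a, b):
    """Jaccard overlap between two purchase histories."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(customer, data):
    """Products bought by similar customers that this one lacks, best first."""
    mine = data[customer]
    scores = {}
    for other, theirs in data.items():
        if other == customer:
            continue
        weight = similarity(mine, theirs)
        for item in theirs - mine:
            scores[item] = scores.get(item, 0.0) + weight
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ana", purchases))  # "shelf" ranks first, via similar buyer ben
```

Real systems weight far more signals (location, demographics, browsing patterns), but the core idea is the same: score products by how strongly they are associated with customers who behave like this one.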
https://medium.com/the-unpublished/customer-journey-map-and-its-role-in-marketing-571d274090a
['Thanasis Papadopoulos']
2020-11-10 16:27:29.681000+00:00
['Maps', 'Marketing', 'Customer Journey', 'Artificial Intelligence', 'Technology']