column       type           min   max
Unnamed: 0   int64          0     3k
title        stringlengths  4     200
text         stringlengths  21    100k
url          stringlengths  45    535
authors      stringlengths  2     56
timestamp    stringlengths  19    32
tags         stringlengths  14    131
400
What Happens When You Actually DM a ‘DM to Collab’ Instagram Scammer
Photo illustration; source: Tim Robberts/Getty Images I recently started posting regularly on my Instagram account, and I have about 1,200 followers. But I wouldn’t call myself an influencer, unless I’m influencing people to write more and drink coffee, which is pretty much all I post about. My main goal, as anyone who checks out my account can see in an instant, is to have a nice place to post selfies and share a bit more of my real life. It’s not to sell anything — I don’t have a single sponsored or branded post on my feed. Nevertheless, brand after brand after brand leaves eerily similar comments on my feed, asking that I DM them to “collab.” Screenshot from my IG account The first time I got this comment, honestly, I was a little flattered. I checked out the account, realized it sold random stuff, and figured they’d made a mistake.
The second, third, fourth, and umpteenth time it happened, I was annoyed. Then I got irritated. And finally, after blocking, banning, and reporting what felt like dozens of these, only to have new iterations continue commenting on my posts, I landed on inquisitive. What did these spam brands actually want? Surely it wasn’t actually to collaborate. Why did they often comment and ask me to DM a second, bigger account? I wasn’t sure. And how on earth were they finding me and my content? What was their strategy? I decided to look further into three brands that left me this mysterious request: Shop Valerio, Brute Impact, and Urban Ice. Here’s what I found: These brands are fake from start to finish. They all have upwards of 100,000 followers, which might incentivize people like me to want to work with them. When I first checked out one account that had messaged me a request to collab, called Brute Impact, it claimed to have 223,000 followers, so if I were featured on its page for wearing its clothes, I might have hoped to gain a couple hundred followers. But looking at Brute Impact’s posts, each received only between 300 and 600 likes. That is an extremely low engagement rate (0.13%–0.27%). For comparison, I get about 100–120 likes on each of my posts with 1,200 followers (an 8%–10% engagement rate). This suggests Brute Impact’s followers are fake. Second, the accounts that ask me to DM to collab almost exclusively ask me to message not the account leaving the comment, but the bigger account they tag in the comment. To me, that suggests that even if they get reported for spam comments, the main “mother” account, if you will, remains safe. Third, when I DM’d the three spam brands that left comments on my post, they all sent almost exactly the same message. Screenshot of my DMs after I DM’d these brands to collab This suggests they’re actually run by the same person or group of people. The three accounts’ responses to my questions were all similar, too. When I asked how they found me, they all cited “scouting teams,” and always pressed that I should visit their website to receive my 50% off code. Fourth, all the websites were identical, low-quality, and very spammy. The main purpose of all of them was to gain “affiliates,” not make sales. Even the shoddy website design was alarmingly similar. Many of the blurbs invited readers to learn more, but did not include links to do so. Their content blogs were full of outdated Covid-19 information.
Screenshots of Urban Ice, Valerio, and Brute Impact websites Fifth, and most damningly in my opinion, there had been changes to usernames. Instagram, to “help keep their community authentic,” lets you see the history of an account’s username over time for accounts that reach a lot of people or advertise on Instagram. I checked these on a hunch, because these accounts didn’t feel authentic, and I wasn’t surprised to see that two of the three brands I looked at had changed their usernames. Not small tweaks, either, but total shifts. This means they could have purchased accounts that already had large followings. Most legitimate brands have their own handles that they have no reason to change so drastically. Screenshots taken by author of former usernames So how did they find me? I realized almost immediately that hashtags were part of the answer. But when I stripped those out and continued receiving comments, I twigged that location was being used, too. I checked other photos posted around the same time as mine, also geotagged in Atlanta, and found the same spam brands had left the same types of comments on several of those photos. There were a few exceptions that let me build a picture of how the brands targeted accounts. The brands that commented on my post specifically target small accounts with only a few hundred or thousand followers. This could be because bigger accounts would likely know the collab offers were nonsense; they assume the smaller accounts have less experience. The accounts only seem to comment on photos of people (typically women). When I photographed my dog, or when I didn’t include a location, or when I included someone else in the pic, I didn’t get comments. Working backward, we can piece together the spam strategy Here’s what these accounts probably do: Create several “flagship” Instagram accounts. Stock extremely cheap merchandise sold at very high prices (you’ll notice these accounts mostly sell jewelry and eyewear). Buy thousands of Instagram followers to make the brand look legit and to incentivize people like me to want their work posted on the account. Create dozens of smaller, fake accounts that can be used as conduits for messaging and take the fall for being reported as spam. Use location and hashtags to find small accounts and ask them to collab. Give the owners of those accounts a “discount” on overpriced items in exchange for an affiliate code to promote on their account. Repeat as often as possible. Profit comes mostly from smaller influencers buying their goods at “half-price.” These spam accounts, in other words, want to sell us on ourselves This is what makes me saddest about these accounts. They’re trying their darndest to prey on people by letting them believe, for a minute, that they’ve made it big. Honestly, the first time it happened, I really felt like I’d done it. I felt like I’d posted great content, and that a brand just happened across my feed, loved my work, and genuinely thought I’d be a great fit. It was so embarrassing to realize how much I’d been fooled. Thank goodness I didn’t buy anything. Now, I’ve been contacted a few times (not on Instagram) to be sponsored and to create promotional material, and I know what a real offer of collaboration looks like: They contact me; they don’t ask me to contact them. They’re clear about why they’re interested in me — the value they think I can provide — rather than saying they discovered me through vague “scouting teams.” They *never* ask me to pay them anything.
They provide their half of the collab upfront. Most times, in the initial contact form, they’ll make it clear they can pay me. Shop Valerio, Brute Impact, and Urban Ice are scams, as are the many accounts like them. They prey on people by selling them a fake dream of having made it. They’re trying to fool people. You shouldn’t feel bad if it happens to you, but if you can, avoid it.
https://medium.com/@basonti69/what-happens-when-you-actually-dm-a-dm-to-collab-instagram-scammer-1488a6d4a87f
[]
2020-12-14 00:25:39.844000+00:00
['Instagram', 'Social', 'Technology', 'Media', 'Influencer']
401
Build once in CI, deploy multiple times for modern web apps using Docker, Kubernetes, and Spinnaker.
To illustrate the flow for the build once, deploy many times CI/CD pattern, we can visualize it with this diagram. Diagram showing a typical flow for a single CI pipeline connected to multiple CD pipelines. In the diagram above, we have the following components: 1) A version control system, e.g. GitHub/GitLab/Bitbucket. 2) A CI pipeline, e.g. Jenkins, Travis CI, or CircleCI (pre-configured with Docker). 3) A container registry, e.g. Amazon ECR or Azure Container Registry. 4) One or more continuous deployment pipelines, depending on your infrastructure; these pipelines should have an agent subscribed to the container registry (Spinnaker is recommended if you are using a Kubernetes cluster). To understand this pattern: Given: we have an app called Lucifer, currently running version 1.1.0 on staging and production. Goal: deploy a new feature, Save Lucifer, to staging -> QA the feature -> deploy to production if QA is successful. Assumptions: our main branch is master, and we want to avoid multiple builds because the unit tests + lint take more than 30 minutes; we also want to avoid pushing separate images for staging and production. To achieve this goal, we can take a step back and break the problem into two parts: 1) build the app, and 2) run the app. Building the app For our scenario we want to build Lucifer once, which means that when the feature is merged into the master branch, we trigger a CI pipeline that runs tests, linting, and code-quality checks, and on success builds the Docker image lucifer-v1.2.0 for the save-lucifer feature. One common approach I used in the past was to keep two git branches, staging and master; that means running the CI pipeline twice and pushing lucifer-v1.2.0-stg when code is merged to staging and lucifer-v1.2.0-prod when code is merged to master. While that approach seems intuitive and simple, the build-once CI approach has several advantages over it: 1) We build a feature exactly once, no matter how many environments we have (staging, testing, production), which saves CI resources and build time. 2) We keep one image per feature in the Docker registry, which saves storage cost and makes rollbacks and versioning simpler. 3) We avoid issues with dependency version management, as our Docker image always contains the same build with the same versions of dependent packages. (This is important because packages sometimes introduce breaking changes unknowingly, which would break us if we had separate builds.) 4) If we build different images per environment, there is no guarantee that the images are “similar enough” to verify that they behave in the same manner. It also opens a lot of possibilities for abuse, where developers/operators sneak extra debugging tools into the non-production images, creating an even bigger rift between images for different environments. After building lucifer-v1.2.0, we push it to the container registry, such as Amazon ECR or Azure Container Registry, where it resides alongside lucifer-v1.1.0. The main motive for this step is to decouple the CI and CD flows, and to maintain versions of our app in case of rollbacks.
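To make the build-once step concrete, here is a minimal sketch of what the CI job might run, assuming a Docker CLI is available on the CI agent; the registry URL, image name, and check targets are hypothetical placeholders, not values prescribed by this article.

import subprocess

# Hypothetical registry and image coordinates; one tag per feature/version.
REGISTRY = "123456789012.dkr.ecr.us-east-1.amazonaws.com"
IMAGE = f"{REGISTRY}/lucifer:v1.2.0"

def run(cmd):
    """Run a command, failing the pipeline on a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Tests, lint, and code-quality checks run before any image is built.
run(["make", "test"])   # placeholder for the project's test target
run(["make", "lint"])   # placeholder for the project's lint target

# 2) Build exactly one image for this feature...
run(["docker", "build", "-t", IMAGE, "."])

# 3) ...and push it once; staging and production both deploy this same tag.
run(["docker", "push", IMAGE])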
Running/deploying the app The next step is to deploy our new feature, lucifer-v1.2.0. This step should be decoupled from the CI step, and one way to achieve that is to use an agent that subscribes to or polls the container registry; if you use Kubernetes, Spinnaker is quite useful here. Spinnaker allows us to define strategies for subscribing to the container registry and to define pipelines for different environments. In our case we can define a strategy to watch for version changes to lucifer, plus two pipelines, one for staging and one for production. The staging pipeline has a trigger that deploys to Kubernetes as soon as a new image is published, while the production pipeline has a manual trigger that waits for a QA/business manager to release the feature once it has been QA’ed on staging (see the pipeline sketch below). So as soon as the container registry receives the image lucifer-v1.2.0, the Spinnaker agent triggers the staging pipeline and deploys the app to the Kubernetes cluster, where it can be QA’ed. Once QA passes, the business manager can trigger the production pipeline, and lucifer-v1.2.0 is deployed to production in seconds. Tools like Spinnaker, Docker, and Kubernetes make this process straightforward and intuitive. In short, the pros of this approach are: 1) It makes the DevOps team’s job easier, as they do not have to set up app-specific dependencies; they always work with the dockerized app. 2) It greatly reduces build + deploy time, since we reuse Docker images with different environment variables, and build time for big apps can exceed 30 minutes. References used: Docker Anti Patterns
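As a rough illustration of the two pipelines described above, the sketch below shows approximately their shape, written as Python dicts for readability. The field names only approximate Spinnaker's pipeline JSON and the repository name is a placeholder, so consult the Spinnaker docs for the actual schema.

# Staging: triggered automatically by the container registry.
staging_pipeline = {
    "application": "lucifer",
    "name": "deploy-staging",
    # Docker-registry trigger: fires as soon as a new lucifer tag is pushed.
    "triggers": [{"type": "docker", "enabled": True,
                  "repository": "myorg/lucifer"}],  # hypothetical repo name
    "stages": [{"type": "deployManifest", "name": "Deploy to staging k8s"}],
}

# Production: no automatic trigger; a human starts it after QA passes.
production_pipeline = {
    "application": "lucifer",
    "name": "deploy-production",
    "triggers": [],
    "stages": [
        # Manual gate so a QA/business manager approves the promotion.
        {"type": "manualJudgment", "name": "Approve release"},
        # Deploys the *same* image tag that was verified on staging.
        {"type": "deployManifest", "name": "Deploy to production k8s"},
    ],
}

The important part is structural: the staging pipeline is driven by the registry, while the production pipeline re-deploys the identical image behind a manual judgment.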
https://medium.com/@y-mohit2316/build-ones-ci-deploy-multiple-times-for-modern-web-apps-using-docker-kubernetes-and-spinnaker-b0a133ef672a
['Mohit Yadav']
2020-12-09 18:57:48.955000+00:00
['Continuous Integration', 'Continuous Delivery', 'Web App Development', 'Technology', 'DevOps']
402
If Apple is seriously considering producing a car, it is further evidence that Silicon Valley is…
If Apple is seriously considering producing a car, it is further evidence that Silicon Valley is interested only in capitalising on the last century’s ideas, rather than genuinely addressing the challenges of today and tomorrow. Apple has made a success of waiting until a linked bundle of technologies mature before producing their most polished incarnation—but surely this particular product is done. It’s not that the car isn’t useful for many; clearly, after several generations of exclusionary policy, hard-coded into infrastructure, it’s not going anywhere fast. It’s just done, in terms of innovation, and declining, in terms of relevance. As I’ve said before, the car is like the horse—something for the weekend, but not a meaningful mobility technology anymore. (Here’s what to do instead.) Apple deploying their vast resources in this direction will lead only to a waste of focus and potential invention that would be better pointed elsewhere.
https://medium.com/@cityofsound/if-apple-is-seriously-considering-producing-a-car-it-is-further-evidence-that-silicon-valley-is-2ead1ed82458
['Dan Hill']
2020-12-26 22:19:04.641000+00:00
['Technology', 'Apple', 'Mobility', 'Cars']
403
Blockchain-based authentication of devices and people
Combining the power of blockchain technology and public key cryptography for secure authentication and identification of people and devices. The Internet of Things (IoT) is the network of devices, vehicles, and home appliances that contain electronics, software, actuators, and connectivity, which allows these things to connect, interact, and exchange data. From airplanes, cars, and drones to medical devices, robots, security cameras, and smartphones, all aspects of our lives are touched by IoT devices. The number of IoT devices is likely to cross 200 billion by 2020. Primechain-API combines the power of blockchain technology with public key cryptography to enable: secure authentication and identification of smartphones, other devices, and users; securing and encrypting online communications; password-less login systems; preventing fake emails and phishing; authenticating DNS records and preventing spoofing attacks; and electronic signatures. Blockchain-based authentication has some special features: signing and decryption keys stay on the device; verification and encryption keys are stored on the blockchain; and it protects against critical cyber attacks such as phishing, man-in-the-middle, and replay attacks. The steps: Step 1 — Retrieving the RSA public key of the verifier; Step 2 — Encrypting the blockchain address of the requester; Step 3 — Sending the encrypted blockchain address to the verifier; Step 4 — Decrypting the encrypted blockchain address; Step 5 — Retrieving the RSA public key of the requester; Step 6 — Generating a random string, timestamp, and hash; Step 7 — Sending the hash to the requester; Step 8 — Decryption of the hash; Step 9 — Signing of the hash by the requester; Step 10 — Creating the encrypted envelope; Step 11 — Sending the encrypted envelope to the verifier; Step 12 — Decrypting of the encrypted envelope by the verifier; Step 13 — Verification of the digital signature. 1. Onboarding a user A user can be an individual, a company, a device, etc., and can be on-boarded using post /api/v1/onboard_user and passing the following parameters: the user identifier, e.g. name, account number, IMEI number, IP address, serial number, CIN, etc.
The user description { "user_identifier": "23792387", "user_description": "Samairah Nagpal's Noodle Bank app on her iPad" } The output is: the new user’s primechain address the new user’s primechain private key the new user’s primechain public key the relevant transaction id the new user’s RSA public key the new user’s RSA private key { "status": 200, "response": { "primechain_address": "16g8d8U3K3PbcvY6LYafgZ23JrnxRXwZqBP8sA", "primechain_private_key": "VECSuwoYmF8LHpBy67eV6NbHqBYBuG74dEaE7435xUcS9FGQ48WdXfjc", "primechain_public_key": "021c2d2e89d68a7cd4505f2464c26d2ace9ced3433f57ac619a9c52fc907042309", "tx_id": "ef3f1cde044f0a16382230d4e700143da3ef5138a2bfb60791c05461a0ff1de2", "rsa_public_key": "-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiB49uOAIfw2nGsueboVL +CWPYqiYS/+pYrUjoCe3owwn395o68IPmYDm2NCYk17Dwx2bDW5/B4OFgznw+eSe Hg1K2RrCQbwArKssjez04VtVBODowv/h8usq6R/g1zsA/YtZTLHMR0tdr9Ton0Op GBbc/qsgAo76OJvGcy7dwCXcbzVkscjURiVQ8Grn+yvpS5DQW0fsklUoX6UO8esT lF1u+TFDaizMu5i1bl/CibiRzc5iT/E907ynBc2PZApcices8Eera8Ye8kGG2cz5 LIuf49OfTlq5zRQQLzrHSF5Tg4cVmxOH4XU3sbfPNxnvTVCBONq59LYvGsNBXuDH hQIDAQAB -----END PUBLIC KEY-----", "rsa_private_key": "-----BEGIN PRIVATE KEY----- MIIEvAIBADANBgkqhkiG9w0BAQEFAASCBKYwggSiAgEAAoIBAQCIHj244Ah/Daca y55uhUv4JY9iqJhL/6litSOgJ7ejDCff3mjrwg+ZgObY0JiTXsPDHZsNbn8Hg4WD OfD55J4eDUrZGsJBvACsqyyN7PThW1UE4OjC/+Hy6yrpH+DXOwD9i1lMscxHS12v 1OifQ6kYFtz+qyACjvo4m8ZzLt3AJdxvNWSxyNRGJVDwauf7K+lLkNBbR+ySVShf pQ7x6xOUXW75MUNqLMy7mLVuX8KJuJHNzmJP8T3TvKcFzY9kClyJx6zwR6trxh7y QYbZzPksi5/j059OWrnNFBAvOsdIXlODhxWbE4fhdText883Ge9NUIE42rn0ti8a w0Fe4MeFAgMBAAECggEAJvWw6OeGxwbbW3oIYM3aTq5BehWTcb09eDksdzym/Q4P o63/Deu/l0ojyM77vMKU+ZXRuWh1B2uHnWXKKVxcPXHEiJt2GmZ7MvDTkdPOy2ne zcSqGpYuz96rq4oqSrBiui9WYfNJ6uYRbLBd3Kf7ECALJQFJ6jGOQQGlLXaulb5U n5aYSsoEh+LcAgouI2Hu9AC+OCv3owynKL8HiWlLbR8yJJ5zaE9egiGWn/VbFu1g EA1wzVXyG2bIFghzt6bexg7DIDqsBOwLHD1lLE3WTgPCmDM2zoOxX5PlDKdFDVG8 Mnt/18FO5xiCpAC7ZMFA2Vd8udKvQd0LSxOVA8MZSQKBgQDVyKyXhLvt8KxepGhs /VmXn+AHLLL6w69Z3Vb1gj/oF8nZjJmN3ETcXKt861UqM23Q4WDhmDhRduWE1hXh elw3jVHkDMQn4Ke5Kv/rZ8aszm+fZKgCjB2AF7nUaJ+cU/oSWHFpk+/S/KZWiHWc mVkDxoBhXCi/6QyLLB8WJGwaewKBgQCi/1zwo/9z3CCKEn8baQkReUDmifaG4PVs ZEAsz0WQ/bphRBQllwoL6xXTTuGAczfoEonSyH+jfm2sLcysMRpyAAPI2Q//d+b2 AFm+eBH/45TGKvfD7klYj830sSzYaA6i/Z2DziG7fmC/auJmgtT1wQ5UMjD3Oghk lCxzn6oF/wKBgGyR/Gz6wQJG/xMVhd8MD2r8i6a5IbBOnwgRa69FVbVGF4G/cOBl pCcRfRn03gyPj87MFwqa5scgjdGXdAdNv/WKdLNPdHMYGbXlS5E+49ww/ulBEj4w 8G50HjDsbVrUHyUf+4D1248YNlWt+aTtEBLlxZ8sUZmc/nzTjHoPR0NvAoGAFSKr uIBrdWiLx5uSY8mA5YUlhz9IekDdUgrFz4mo6Z4c9tPPEPi+0sDO+bF2yCMokq0k tfJNqrOQIQ1nRsSvOy0JUJfk3Sl9B1UQTgRfwSCPgAq+SeeyFwu+lwYKXJ1RmIzu SdMGyLsgbHG9nbFFUACSjRRdCRG7WN9lzDBd6Z0CgYAeg3Be/Kox+anF2pRulDa3 Ro9UiVPrsJQv/0qKO8vLsJyDyCFmaut2v1XPQm1c50gaQ5Crn/a1IavgVyoU5Kl8 RQJsaS/5ZgA5Hm1XIdv6edCNn8bvFw6927aC5BsRuwFzPzplplfJ2fQRQpBtpJN+ E4ZylXnYyCN1ar3WSp6/eQ== -----END PRIVATE KEY-----" } } The following are automatically published to the AUTH_USERS_MASTERLIST datastream: new user’s identifier new user’s description new user’s primechain address new user’s primechain public key new user’s RSA public key The newly created user’s primechain private key and RSA private key are not published to any datastream. 2. Authentication Authentication is a 13-step process. The user being authenticated is referred to as ‘requester’ and the user carrying out the authentication is referred to as the ‘verifier’. In this guide, we presume that Noodle Bank is the verifier and Samairah Nagpal is the requester. 
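As a rough sketch of driving these endpoints from code, here is how the onboarding call from section 1 might look; the base URL is a hypothetical placeholder for wherever Primechain-API is deployed, and the payload is the one shown above.

import requests

BASE = "http://localhost:8000"  # hypothetical Primechain-API deployment URL

# Payload taken from the onboarding example above.
payload = {
    "user_identifier": "23792387",
    "user_description": "Samairah Nagpal's Noodle Bank app on her iPad",
}
resp = requests.post(f"{BASE}/api/v1/onboard_user", json=payload).json()

# Only addresses and public keys are published to AUTH_USERS_MASTERLIST;
# the primechain and RSA private keys must stay with the user/device.
print(resp["response"]["primechain_address"])
print(resp["response"]["rsa_public_key"])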
Their primechain addresses are 17SEyDKEwvA46U1FTUVVZWYgnR4X4L576vVAbp (Noodle Bank) and 16g8d8U3K3PbcvY6LYafgZ23JrnxRXwZqBP8sA (Samairah). Optional — Use of symmetric encryption At any stage, you may optionally use AES symmetric encryption using post /api/v1/encrypt_data_aes and passing the data as the parameter: { "data":"I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times." } The output is: the AES password, the AES initialization vector, the AES authentication tag, and the encrypted data. { "status": 200, "response": { "aes_password": "o9tgRCETlHLZdNhlKKgdDshgiwvujn84", "aes_iv": "LdjZLovqIkL3", "aes_tag": { "type": "Buffer", "data": [ 210, 255, 136, 213, 61, 82, 117, 102, 222, 62, 93, 134, 245, 113, 100, 82 ] }, "encrypted_data_aes": "4896275f060be692d50406292602e6cb53a6d30426c11b0658a8dc31ed196ef4841ffa8b9c8d6315f8798387f93157aa35bb5d280bf208d2bc645e2e184f0ea551a372b924b329b391b6ecf75f3fec3a1760ae306de25d3bc36cc30bf93cc9e3988c743c6925f109b6760bca77826bfd7673563b99" } } To decrypt, use post /api/v1/decrypt_data_aes and pass the following parameters: the encrypted data, the AES password, the AES initialization vector, and the AES authentication tag. { "encrypted_data_aes": "4896275f060be692d50406292602e6cb53a6d30426c11b0658a8dc31ed196ef4841ffa8b9c8d6315f8798387f93157aa35bb5d280bf208d2bc645e2e184f0ea551a372b924b329b391b6ecf75f3fec3a1760ae306de25d3bc36cc30bf93cc9e3988c743c6925f109b6760bca77826bfd7673563b99", "aes_password": "o9tgRCETlHLZdNhlKKgdDshgiwvujn84", "aes_iv": "LdjZLovqIkL3", "aes_tag": { "type": "Buffer", "data": [ 210, 255, 136, 213, 61, 82, 117, 102, 222, 62, 93, 134, 245, 113, 100, 82 ] } } The output will be the decrypted data. { "status": 200, "response": "I fear not the man who has practiced 10,000 kicks once, but I fear the man who has practiced one kick 10,000 times." } Step 1 — Retrieving the RSA public key of the verifier Samairah retrieves Noodle Bank’s ‘RSA public key’ using post /api/v1/get_rsa_key and passing Noodle Bank's 'primechain address' as a parameter. { "primechain_address": "17SEyDKEwvA46U1FTUVVZWYgnR4X4L576vVAbp" } The output is Noodle Bank’s ‘RSA public key’. { "status": 200, "rsa_public_key": "-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAkydbbI+68zjRmp0n7Yss NwKbUl1IzBEqgm0Rp/utue8VNPfZaW7YrnwmEO7jO939C0/xAgayE6vR5VT7sItX uMKwvP0DozxWtUGGcoHEZgImzSXJGomZpr2+M6TdW+kbisUUKbjIApQvnGlh93Zv XiRTsvMkxC1Lf8Wkj52V7Xdn7O2p1tGg/j4wv78kT9wJ67xEnBmsGpGUZZYPAMZr j0WrsakvT5vqwtkGum2OI9eRNlB7qgDsuOrxAm3jyx17s+tOi2Sasn1GywHQmU6n YpCSsVv6ywGCMH5xLGAWT3glGCx2mwjAi+/QbpSXIWorlzzlZOR2xI+844dyDxbW MQIDAQAB -----END PUBLIC KEY-----" } Step 2 — Encrypting the blockchain address of the requester Samairah encrypts her own ‘primechain address’ with Noodle Bank’s ‘RSA public key’.
This is done using post /api/v1/encrypt_data_rsa and passing the following parameters: data , which is Samairah's 'primechain address' rsa_public_key of Noodle Bank { "data": "16g8d8U3K3PbcvY6LYafgZ23JrnxRXwZqBP8sA", "rsa_public_key": "-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAkydbbI+68zjRmp0n7Yss NwKbUl1IzBEqgm0Rp/utue8VNPfZaW7YrnwmEO7jO939C0/xAgayE6vR5VT7sItX uMKwvP0DozxWtUGGcoHEZgImzSXJGomZpr2+M6TdW+kbisUUKbjIApQvnGlh93Zv XiRTsvMkxC1Lf8Wkj52V7Xdn7O2p1tGg/j4wv78kT9wJ67xEnBmsGpGUZZYPAMZr j0WrsakvT5vqwtkGum2OI9eRNlB7qgDsuOrxAm3jyx17s+tOi2Sasn1GywHQmU6n YpCSsVv6ywGCMH5xLGAWT3glGCx2mwjAi+/QbpSXIWorlzzlZOR2xI+844dyDxbW MQIDAQAB -----END PUBLIC KEY-----" } The output is the encrypted value referred to as encrypted_data_rsa : { "status": 200, "encrypted_data_rsa": "acN4z1AbYKHbuK5Tixi+AgYwg/3XMqVxU3UJmZrXcRuSXYSPyDLrB7+BQeiazfcFk9WxpnvT8nXHkQ6Hz2rTUF1K1Lv5XM33iQMqdRUa9WzQGJS9IakS5TSw+OpxhCR0KWa1kJ4XIa6QHwCGqUQrUo7WXTV9k/Lb55eLZh9bINy6LAAeYQfQX7LZMVCuC7lmJcUAkDTYuccgZdtAc1BCHl0ODq7rcMSLpr/M0h+tjKE6fuGP9AuB7NznoAy+7yf9toy67DNIWAeQXptTq8ukBJ6AzBTerUbTrbwOWlBWOyVcnsyPkXRtPUNryu5Jvqlw6//w0Fc9FG3dM+lmuzWQ5A==" } Step 3 — Sending the encrypted blockchain address to the verifier Samairah sends the encrypted_data_rsa to Noodle Bank. Step 4 — Decrypting the encrypted blockchain address Noodle Bank uses post /api/v1/decrypt_data_rsa to decrypt the encrypted_data_rsa . The parameters are: Noodle Bank’s RSA private key The encrypted_data_rsa { "rsa_private_key": "-----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCTJ1tsj7rzONGa nSftiyw3AptSXUjMESqCbRGn+6257xU099lpbtiufCYQ7uM73f0LT/ECBrITq9Hl VPuwi1e4wrC8/QOjPFa1QYZygcRmAibNJckaiZmmvb4zpN1b6RuKxRQpuMgClC+c aWH3dm9eJFOy8yTELUt/xaSPnZXtd2fs7anW0aD+PjC/vyRP3AnrvEScGawakZRl lg8AxmuPRauxqS9Pm+rC2Qa6bY4j15E2UHuqAOy46vECbePLHXuz606LZJqyfUbL AdCZTqdikJKxW/rLAYIwfnEsYBZPeCUYLHabCMCL79BulJchaiuXPOVk5HbEj7zj h3IPFtYxAgMBAAECggEAY8GujK3zQqcmEPaw9qv+UVyHBxMOIqkQdFKUQZiwcPfP HJVY4cyvP7oR5DDOAuu+e0i6TXFUj1lPdXRjG4+a7Dmvrq6nJKXm8gF1r3KhPbX/ r9sJtd/KNeszYbdGCOTCMxTfUlld3cGvdQ1LyIKVhPCDfTCvn/5EzF2j7WgbF1tm oKuZB69LoVRSQ+rW9egQUWX5OCIC2aPReoRQCpPW3hz+CCuxk387twqlbS4/YFlB fdzC8N80umFvRFB8+YrgLrE/AM+dfFf8XMbwQDO13V4E6S5zVohAAJddxq7Nsv+e 1aZK+3NxlrkrOFij0ApLVtugToIBIMsGKbXuc5g6sQKBgQD9VIXvkX9fvr7wFgQ8 BDqDwavhUfQ3GdsZgzEnLK4SUgB1ApC7xMgwXauN38AL3kZEqQZNnWciQY8bU3i1 EFdFQ7K9n7s8nM/d8N/rbFIndRICJUQh47UAKWNZRaCV/IVMLPVjjHCjaej9aOUP JsyqbGfFA62rRCXoCHSzVjlFfwKBgQCUtF+PA0kP/V8CkaNV0VdRa702s57tu6Tg Quk8SOH4Ame9TcBrP95bpxUzBKapBj/ncW8lJKDD7zLYTQalWUG+KX/17i6NPgD6 vq+FwoCaGskRQTw3AUbkfHj6u7Cn41EmvxIZ2KWLi8Hl4+W3o6/mnsuucEZDOPwy FTN5FW+cTwKBgG0RTvjt86ENRrenQvtz9p1zbMT9u99dSm+ZhDgRjIBmvbui9x1g g7APJCVZCB4T/Lzi6MvR0O12vF5PedC60FgJ5ZKuirZ17Sjo4/9AC77hMHesA8Fz gCIpr5Rn3dO1fM5nLN9HP9ebaaxw1O3JDqTxN1wjUUpDdO6JdXUg0leRAoGBAI4l FSspos+MDSPxf0ZrQ6Jq8IW3kXYCZoqQq06bBJYEBpIoHoTmmnDV+Ce6jG0JslBU WEATETH6Foo4pt+rwHI8TTsSoKEW4ezOFg4wbKnibMz3pM2XhOKoMSTMAQObAVme T3kxZJ1NzN0pyc6Ow3gZ1u06GY/sivZ82aUm3nd1AoGAIqGOv9GiNYWJIdsdGmhI YS93Qj3Pw6ZSTqGTW4FYM9f4tawEWaGFGBL2CBYEp9nUTUBEAq8HJes0bimeScGn Tawewg84U4oiHuyTbtwIi5PkB+XIKfGaXU3SMaHYHiORRe7BhQwWKHpLdob4JJtm CdNBuN+I1w9yaWG1TeWVjk8= -----END PRIVATE KEY-----", "encrypted_data_rsa": 
"acN4z1AbYKHbuK5Tixi+AgYwg/3XMqVxU3UJmZrXcRuSXYSPyDLrB7+BQeiazfcFk9WxpnvT8nXHkQ6Hz2rTUF1K1Lv5XM33iQMqdRUa9WzQGJS9IakS5TSw+OpxhCR0KWa1kJ4XIa6QHwCGqUQrUo7WXTV9k/Lb55eLZh9bINy6LAAeYQfQX7LZMVCuC7lmJcUAkDTYuccgZdtAc1BCHl0ODq7rcMSLpr/M0h+tjKE6fuGP9AuB7NznoAy+7yf9toy67DNIWAeQXptTq8ukBJ6AzBTerUbTrbwOWlBWOyVcnsyPkXRtPUNryu5Jvqlw6//w0Fc9FG3dM+lmuzWQ5A==" } The output is the decrypted data, which in this case is Samairah’s ‘primechain address’. { "status": 200, "decrypted_data": "16g8d8U3K3PbcvY6LYafgZ23JrnxRXwZqBP8sA" } Step 5 — Retrieving the RSA public key of the requester Noodle Bank retrieves Samairah’s ‘RSA public key’ using post /api/v1/get_rsa_key and passing Samairah's 'primechain address' as a parameter. { "primechain_address": "16g8d8U3K3PbcvY6LYafgZ23JrnxRXwZqBP8sA" } The output is the ‘RSA public key’ of the requester. { "status": 200, "rsa_public_key": "-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAiB49uOAIfw2nGsueboVL +CWPYqiYS/+pYrUjoCe3owwn395o68IPmYDm2NCYk17Dwx2bDW5/B4OFgznw+eSe Hg1K2RrCQbwArKssjez04VtVBODowv/h8usq6R/g1zsA/YtZTLHMR0tdr9Ton0Op GBbc/qsgAo76OJvGcy7dwCXcbzVkscjURiVQ8Grn+yvpS5DQW0fsklUoX6UO8esT lF1u+TFDaizMu5i1bl/CibiRzc5iT/E907ynBc2PZApcices8Eera8Ye8kGG2cz5 LIuf49OfTlq5zRQQLzrHSF5Tg4cVmxOH4XU3sbfPNxnvTVCBONq59LYvGsNBXuDH hQIDAQAB -----END PUBLIC KEY-----" } Step 6 — Generating a random string and timestamp and hash Noodle Bank generates a 512-character random string and the current GMT timestamp. For this it uses get /api/v1/create_string_timestamp and gets an output like: { "status": 200, "response": { "string": "mroHlYTyC5gtYD3NcvS0vKG0rIrQOOF1DmdfMl5NCFBg7Xi0Yf9xsKSN987OUwcnSKfoGn6qZAOg7m87C3BzgpGL8WYcPlLfDgJhBl88THhfhGjIxnkOdrGtiiWzWBdRbQvRsj9FBc5q6kPQHJ8tXuZt3LRnN09JTEAMDZWoVj0ChTAn8AjN8sYbiIpTmEZRHoFGz1bvVNSlkEBNd2ju9G10MkkFcsezOpckT6pityTGvVPvYS5Zh8MjLOP7KhoW1GTYb7mRyMCsoJso6yxQ1Q2TRZOq4N0BMeajUBaLeubMpk7Dq1qado9Dk6xSvLFkBzsnj3BO7rT5T6M2zB8j8Zusf90vtu8e51CJNG1tbiBFXijuiQJt51Qi6sXYaCAX89FNmXDKCICNAlwnuttYc8N5AgYoUtnFBVQ4Uxsu3gueNlRq9qOFrD9LZ3P8dnVv7oCng1CjK4jNELUOzQFfFtikGu1j5OOw5IcqGNXhDGalUzQZ2eqWpiznzheE8W6a", "timestamp": 1544601150, "hash": "e7d0265f614a53dada9ecbe2555930cb4cfc3fc1accf6afb18b35a3619baff59f0dce692edcd51e2284fb8f72cecaf75a6ab5e6cdbfabafcbc1020f606d6fd82" } } Note: The hash is the SHA-512 hash of string^timestamp The string and timestamp are stored in Noodle Bank's private database. The hash is encrypted with Samairah's RSA public key and sent to her. 
This is done using post /api/v1/encrypt_data_rsa and passing the following parameters: data , which is the hash rsa_public_key of Samairah { "data": "e7d0265f614a53dada9ecbe2555930cb4cfc3fc1accf6afb18b35a3619baff59f0dce692edcd51e2284fb8f72cecaf75a6ab5e6cdbfabafcbc1020f606d6fd82", "rsa_public_key": "-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAjAcsDs6uRwW2EpqRwmOt ZYXPID+rWvcV3ZKmGUr4mUictf/RVk0/o8Q/pGbeeKqfNXTBfvAh4ZcaFmXP3p98 5JtSXS/tDw8/6nkZjLHbb3RDYSGhwMzd5DAkNILd0wwqh1QCdrlda7n4ZXrU77dg CRoWBPwUY8ItZSgOHFhdmU7acmI4iNNPKRfDPzJ5PefquqFrKQCbK7yVL1DnKPX7 56XowLHpYrutlYr9ZaOgZKWaA6NR8ThVVdyhTpU3FhigwdR5UGtAjVU08Ot14H9F 6rUyqtHINjfTQyMxDYiwVmfcAW0UJ1Nsp/ZkBOAUG+FOymHtYTsNbtacMkPi2O16 WwIDAQAB -----END PUBLIC KEY-----" } The output is the encrypted value referred to as encrypted_data_rsa : { "status": 200, "encrypted_data_rsa": "HuQ6xjyEN9nayNAHg34enp89It2e10iew0mh3zyc7ZSMtz/i6weuai+PrLdR+LL3KbITQSJZHA1XDkKNvz7GmV6eDSX7meZWMXQJDVpBYeQYV+v38bGlAFOTmGosCeaK+CeLFYPkC6J1SKulHU07hSNdO4BGkek0/au87ztK2RLo5E7qUjGOtxbfu5zMQEYBghNhXVxfD2jO+Jr8zzkoRPeq7yfKtYtjtR68u6emIkRJIh08KHXZarplHFeDnaJ1onqZMWYB0BbBSJDVt06S4PFzceMk5pdF5MwUe9y8OSzdeD0jsmGfG/YLEtpNSc2IxCO0OtYgw3XbWbnbZvVntA==" } Step 7 — Sending the hash to the requester Noodle Bank sends the encrypted_data_rsa to Samairah. Step 8 — Decryption of the hash Samairah decrypts the encrypted value using her own RSA private key. Samairah uses post /api/v1/decrypt_data_rsa to decrypt the encrypted_data_rsa . The parameters are: Samairah’s RSA private key The encrypted_data_rsa { "rsa_private_key": "-----BEGIN PRIVATE KEY----- MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQCMBywOzq5HBbYS mpHCY61lhc8gP6ta9xXdkqYZSviZSJy1/9FWTT+jxD+kZt54qp81dMF+8CHhlxoW Zc/en3zkm1JdL+0PDz/qeRmMsdtvdENhIaHAzN3kMCQ0gt3TDCqHVAJ2uV1rufhl etTvt2AJGhYE/BRjwi1lKA4cWF2ZTtpyYjiI008pF8M/Mnk95+q6oWspAJsrvJUv UOco9fvnpejAseliu62Viv1lo6BkpZoDo1HxOFVV3KFOlTcWGKDB1HlQa0CNVTTw 63Xgf0XqtTKq0cg2N9NDIzENiLBWZ9wBbRQnU2yn9mQE4BQb4U7KYe1hOw1u1pwy Q+LY7XpbAgMBAAECggEBAIE2YQ5s496/w2pZXaVuC2SCXEYa2ol/NZsXptPyHYJ8 wbckD8y+TbpV7pBLKIFamL1gNulmty0PHNCMNIvuyfW96fv5rJvX65f3FX+B48Zu F3r66OMbaKoXTmFyXTVRpbDo8bkShcVRf4hNF074/NKJUsZYwovnc7JDivnXBM/g z1KtiR+IzUUBjO/6Vsax6Fdw30tzBhV18serm0Kqa1Fgk8gJrrnfWHm7aQQUj+PE BhwMtFGU89p1bJ4CML9x/NC5aPjpQSg1XfWbB3vn/K2Iea498q+jjSlOLrhGef4e MydqPMwiQ3UxkYvO431kB5UITxZHf9mYm7ucY02hnNkCgYEA8CqQFL1B2ARMVPqp hEgDk6CAfE9kkMmC5G9yBXitfYoT9BirIAhYjiXtoiXVrY8Wv0fsArF/Re7DtgeP qBkeeYLz5TKncVVbSuFak6xpA2DomKDUTFCvZf5A0pcNbclO85Lt1AbbLtBkOmaY V8dZVDdFC0W2PfUSpFMnusXLeOcCgYEAlUKDWVX4tWj2Pr58nBxWu4ySMTHlFNVT HjMGA17jsrNrQFRw3+aWxjamTEUvxczvegj2o+XpQLaG7hCz46t0yWOrvvEVPgpp njlVo+Zx1mZwZO1FESE3FENtNTeeScZOvE6OZ39xyalF2BlNY9BSxbx1YHbz0wog dyuFqxP9AG0CgYEAhOG36JGyrbfrGBW812EVAYiLrrwq02V7k7MJ7ncP3ucYFTXU 8RtNDBF2QwIWETqbhmhf8DmPRv1NshjK5mJHl0nacpUtSirFIVBA0nZRgDoNV+2c qUD3W0JrUVmcZ4M7uM8x4d+NRICvBBUh82dsSIlwHUWMCQnhL6SG1fN8mj8CgYBg OehKUPWtpsSSQOY/EroL4Z1iX+NrYfhbENQAmk07qRYs/ANlJdjwYs0lgLlC0VNW nq67jX6qPMKSemwvDBuXtk1EJVLnk3jyC86dgvTDH3m4Z4tOdtZ2lt2yIHBI6bNb DV4bdXtbYmjn37AX+HdUiaI2lZmt5ep2SbW8TH+gpQKBgG3j7+t5N9wsCMv0JuS3 k1Kx2gY2tJfn3+JWd5FNPh5HUwD+gkJyuX2sJSKk8o3xHvxcM/2SCHoqLHUdXbQ9 q0AEVEwy6XV1wsJ+6ASoA+CbcL3mllIbB1E+fYiA4km9RabHwTbh82rogsOUMPXm 9qVefyQneafYs+zz1RuX0ya3 -----END PRIVATE KEY-----", "encrypted_data_rsa": 
"HuQ6xjyEN9nayNAHg34enp89It2e10iew0mh3zyc7ZSMtz/i6weuai+PrLdR+LL3KbITQSJZHA1XDkKNvz7GmV6eDSX7meZWMXQJDVpBYeQYV+v38bGlAFOTmGosCeaK+CeLFYPkC6J1SKulHU07hSNdO4BGkek0/au87ztK2RLo5E7qUjGOtxbfu5zMQEYBghNhXVxfD2jO+Jr8zzkoRPeq7yfKtYtjtR68u6emIkRJIh08KHXZarplHFeDnaJ1onqZMWYB0BbBSJDVt06S4PFzceMk5pdF5MwUe9y8OSzdeD0jsmGfG/YLEtpNSc2IxCO0OtYgw3XbWbnbZvVntA==" } The output is the decrypted data, which in this case is the hash. { "status": 200, "decrypted_data": "e7d0265f614a53dada9ecbe2555930cb4cfc3fc1accf6afb18b35a3619baff59f0dce692edcd51e2284fb8f72cecaf75a6ab5e6cdbfabafcbc1020f606d6fd82" } Step 9 — Signing of the hash by the requester Samairah signs the hash using her ‘primechain private key’. For this she uses post /api/v1/create_signature and passes the following parameters: The data to be signed (hash in this case) Samairah’s private key { "data": "e7d0265f614a53dada9ecbe2555930cb4cfc3fc1accf6afb18b35a3619baff59f0dce692edcd51e2284fb8f72cecaf75a6ab5e6cdbfabafcbc1020f606d6fd82", "primechain_private_key": "VEjwBWC6mgp8CNFPguZwt997rB5JCExQzB7wavwgKSZimBmaeKwPWdKH" } The output is the digital signature. { "status": 200, "signature": "H60blR7quU3GD5S3Apdh1uqM7ydEzMNRiQNAtKi6uOkdMhO14bjZpopURvbCoRKCsXusBRL7Yx03QiW0lvHO554=" } Step 10 — Creating the encrypted envelope Samairah creates an ‘encrypted envelope’ containing the following: the digital signature Samairah’s primechain address the data that has been signed (hash in this case) This is done using post /api/v1/encrypt_data_rsa and passing the following parameters: data (signature, primechain address, and hash) rsa_public_key of Noodle Bank { "data": { "signature":"H60blR7quU3GD5S3Apdh1uqM7ydEzMNRiQNAtKi6uOkdMhO14bjZpopURvbCoRKCsXusBRL7Yx03QiW0lvHO554=", "primechain_address":"16g8d8U3K3PbcvY6LYafgZ23JrnxRXwZqBP8sA", "data":"e7d0265f614a53dada9ecbe2555930cb4cfc3fc1accf6afb18b35a3619baff59f0dce692edcd51e2284fb8f72cecaf75a6ab5e6cdbfabafcbc1020f606d6fd82" }, "rsa_public_key": "-----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAkydbbI+68zjRmp0n7Yss NwKbUl1IzBEqgm0Rp/utue8VNPfZaW7YrnwmEO7jO939C0/xAgayE6vR5VT7sItX uMKwvP0DozxWtUGGcoHEZgImzSXJGomZpr2+M6TdW+kbisUUKbjIApQvnGlh93Zv XiRTsvMkxC1Lf8Wkj52V7Xdn7O2p1tGg/j4wv78kT9wJ67xEnBmsGpGUZZYPAMZr j0WrsakvT5vqwtkGum2OI9eRNlB7qgDsuOrxAm3jyx17s+tOi2Sasn1GywHQmU6n YpCSsVv6ywGCMH5xLGAWT3glGCx2mwjAi+/QbpSXIWorlzzlZOR2xI+844dyDxbW MQIDAQAB -----END PUBLIC KEY-----" } The output is the encrypted value referred to as encrypted_data_rsa : { "status": 200, "encrypted_data_rsa": "R15SEiQtvY2QpxAY5MMXq2ar51ByO2y8TQcQtG9cge7SDPxT2cx8pQ8JcC89DVZuifLv4n6BGPtl/iIJDRtbyW+lEci+iVIGkOo5wM+Qz4PxIUUWGYRMGDPv2/K7tOHfEInNw+fjM5oS1NJRsq+ofjCxi3BN4hw7MW27BB8PCUVnKMMxLNL6fWuszqFIFVFPltH2r0sotwB1xyWUUNOZiZYIYgCMUwuoMhxRiDryR6vvQaadu774W4w2GfT63lHNkCcqK4KlQKovR6RP6XTh/1jBw8O0FHtnE0hVsheINFTGb6M3fQv48mbUzHt3e627IoQJ8DAcJoE6VXhxA5J1XzmIo48pHWgtvSOzt+Nti4kF8lTszlMb1cttdaG6wsJykqcSjW0NDD6wh2/4h3g7u45TzcgPpw8Ixmf6HFUtB1VP+P5EQuA2kR/ZbQSFgo1CUw0LhMzQ3+xnbDnZkS/TWbuwKtd9RcUQ9HEFdn3xFzrvlBQ3bqwfMXMzC4Y6NVhtV+UR1cQayteJiYTuea0XLOhNjerFCDJlG6tcinHIsNw+CSLKFrjjWo97v5U7llb+bUpAP1xYRmS3EPVzP1Eu00ymhNlyUdIVZbI48si2p0ePhorE2S8bSOnOOBt6QW9rOSG4VXAVBjUrHOw/nNIwWoYXXs6oSvOCykcelRNjMF0=" } Step 11 — Sending the encrypted envelope to the verifier Samairah sends the encrypted envelope to Noodle Bank. Step 12 — Decrypting of the encrypted envelope by the verifier Noodle Bank uses post /api/v1/decrypt_data_rsa to decrypt the encrypted_data_rsa . 
The parameters are: Noodle Bank’s RSA private key The encrypted_data_rsa { "rsa_private_key": "-----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQCTJ1tsj7rzONGa nSftiyw3AptSXUjMESqCbRGn+6257xU099lpbtiufCYQ7uM73f0LT/ECBrITq9Hl VPuwi1e4wrC8/QOjPFa1QYZygcRmAibNJckaiZmmvb4zpN1b6RuKxRQpuMgClC+c aWH3dm9eJFOy8yTELUt/xaSPnZXtd2fs7anW0aD+PjC/vyRP3AnrvEScGawakZRl lg8AxmuPRauxqS9Pm+rC2Qa6bY4j15E2UHuqAOy46vECbePLHXuz606LZJqyfUbL AdCZTqdikJKxW/rLAYIwfnEsYBZPeCUYLHabCMCL79BulJchaiuXPOVk5HbEj7zj h3IPFtYxAgMBAAECggEAY8GujK3zQqcmEPaw9qv+UVyHBxMOIqkQdFKUQZiwcPfP HJVY4cyvP7oR5DDOAuu+e0i6TXFUj1lPdXRjG4+a7Dmvrq6nJKXm8gF1r3KhPbX/ r9sJtd/KNeszYbdGCOTCMxTfUlld3cGvdQ1LyIKVhPCDfTCvn/5EzF2j7WgbF1tm oKuZB69LoVRSQ+rW9egQUWX5OCIC2aPReoRQCpPW3hz+CCuxk387twqlbS4/YFlB fdzC8N80umFvRFB8+YrgLrE/AM+dfFf8XMbwQDO13V4E6S5zVohAAJddxq7Nsv+e 1aZK+3NxlrkrOFij0ApLVtugToIBIMsGKbXuc5g6sQKBgQD9VIXvkX9fvr7wFgQ8 BDqDwavhUfQ3GdsZgzEnLK4SUgB1ApC7xMgwXauN38AL3kZEqQZNnWciQY8bU3i1 EFdFQ7K9n7s8nM/d8N/rbFIndRICJUQh47UAKWNZRaCV/IVMLPVjjHCjaej9aOUP JsyqbGfFA62rRCXoCHSzVjlFfwKBgQCUtF+PA0kP/V8CkaNV0VdRa702s57tu6Tg Quk8SOH4Ame9TcBrP95bpxUzBKapBj/ncW8lJKDD7zLYTQalWUG+KX/17i6NPgD6 vq+FwoCaGskRQTw3AUbkfHj6u7Cn41EmvxIZ2KWLi8Hl4+W3o6/mnsuucEZDOPwy FTN5FW+cTwKBgG0RTvjt86ENRrenQvtz9p1zbMT9u99dSm+ZhDgRjIBmvbui9x1g g7APJCVZCB4T/Lzi6MvR0O12vF5PedC60FgJ5ZKuirZ17Sjo4/9AC77hMHesA8Fz gCIpr5Rn3dO1fM5nLN9HP9ebaaxw1O3JDqTxN1wjUUpDdO6JdXUg0leRAoGBAI4l FSspos+MDSPxf0ZrQ6Jq8IW3kXYCZoqQq06bBJYEBpIoHoTmmnDV+Ce6jG0JslBU WEATETH6Foo4pt+rwHI8TTsSoKEW4ezOFg4wbKnibMz3pM2XhOKoMSTMAQObAVme T3kxZJ1NzN0pyc6Ow3gZ1u06GY/sivZ82aUm3nd1AoGAIqGOv9GiNYWJIdsdGmhI YS93Qj3Pw6ZSTqGTW4FYM9f4tawEWaGFGBL2CBYEp9nUTUBEAq8HJes0bimeScGn Tawewg84U4oiHuyTbtwIi5PkB+XIKfGaXU3SMaHYHiORRe7BhQwWKHpLdob4JJtm CdNBuN+I1w9yaWG1TeWVjk8= -----END PRIVATE KEY-----", "encrypted_data_rsa": "R15SEiQtvY2QpxAY5MMXq2ar51ByO2y8TQcQtG9cge7SDPxT2cx8pQ8JcC89DVZuifLv4n6BGPtl/iIJDRtbyW+lEci+iVIGkOo5wM+Qz4PxIUUWGYRMGDPv2/K7tOHfEInNw+fjM5oS1NJRsq+ofjCxi3BN4hw7MW27BB8PCUVnKMMxLNL6fWuszqFIFVFPltH2r0sotwB1xyWUUNOZiZYIYgCMUwuoMhxRiDryR6vvQaadu774W4w2GfT63lHNkCcqK4KlQKovR6RP6XTh/1jBw8O0FHtnE0hVsheINFTGb6M3fQv48mbUzHt3e627IoQJ8DAcJoE6VXhxA5J1XzmIo48pHWgtvSOzt+Nti4kF8lTszlMb1cttdaG6wsJykqcSjW0NDD6wh2/4h3g7u45TzcgPpw8Ixmf6HFUtB1VP+P5EQuA2kR/ZbQSFgo1CUw0LhMzQ3+xnbDnZkS/TWbuwKtd9RcUQ9HEFdn3xFzrvlBQ3bqwfMXMzC4Y6NVhtV+UR1cQayteJiYTuea0XLOhNjerFCDJlG6tcinHIsNw+CSLKFrjjWo97v5U7llb+bUpAP1xYRmS3EPVzP1Eu00ymhNlyUdIVZbI48si2p0ePhorE2S8bSOnOOBt6QW9rOSG4VXAVBjUrHOw/nNIwWoYXXs6oSvOCykcelRNjMF0=" } The output is the decrypted data referred to as decrypted_data { "status": 200, "decrypted_data": " { "signature":"H60blR7quU3GD5S3Apdh1uqM7ydEzMNRiQNAtKi6uOkdMhO14bjZpopURvbCoRKCsXusBRL7Yx03QiW0lvHO554=", "primechain_address":"16g8d8U3K3PbcvY6LYafgZ23JrnxRXwZqBP8sA", "data":"e7d0265f614a53dada9ecbe2555930cb4cfc3fc1accf6afb18b35a3619baff59f0dce692edcd51e2284fb8f72cecaf75a6ab5e6cdbfabafcbc1020f606d6fd82" }" } Step 13 — Verification of the digital signature Noodle Bank verifies the digital signature using post /api/v1/verify_signature and passes the following parameters: The data to be verified Samairah’s primechain address The signature { "data":"e7d0265f614a53dada9ecbe2555930cb4cfc3fc1accf6afb18b35a3619baff59f0dce692edcd51e2284fb8f72cecaf75a6ab5e6cdbfabafcbc1020f606d6fd82", "primechain_address":"16g8d8U3K3PbcvY6LYafgZ23JrnxRXwZqBP8sA", "signature":"H60blR7quU3GD5S3Apdh1uqM7ydEzMNRiQNAtKi6uOkdMhO14bjZpopURvbCoRKCsXusBRL7Yx03QiW0lvHO554=" } The output will be true if the signature is valid and false if the signature is invalid 
or if an error occurs. { "status": 200, "response": true } { "status": 200, "response": false } If the signature is valid, the entity is verified. Noodle Bank also uses the stored timestamp to confirm that the signature is returned within a pre-set interval, e.g. 30 seconds. Note: For the updated details, see: https://github.com/Primechain/primechain-api-docs/blob/master/docs/Authentication.MD
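To make step 6's challenge construction concrete, here is a small sketch of what /api/v1/create_string_timestamp produces server-side, following the note above that the hash is the SHA-512 hash of string^timestamp. The literal "^" separator is an assumption about the concatenation format, as is the interval-check helper.

import hashlib
import secrets
import string
import time

# 512-character random string and current GMT timestamp (step 6).
rand = "".join(secrets.choice(string.ascii_letters + string.digits)
               for _ in range(512))
ts = int(time.time())

# Assumed concatenation: SHA-512 over "<string>^<timestamp>".
challenge_hash = hashlib.sha512(f"{rand}^{ts}".encode()).hexdigest()

# The verifier stores (rand, ts) privately and sends only the hash; when the
# signed hash comes back, it checks the round trip stayed within the pre-set
# interval, e.g. 30 seconds.
def within_interval(issued_ts: int, max_seconds: int = 30) -> bool:
    return int(time.time()) - issued_ts <= max_seconds

print(challenge_hash[:32], within_interval(ts))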
https://medium.com/blockchain-blog/blockchain-based-authentication-of-devices-and-people-c7efcfcf0b32
['Rohas Nagpal']
2018-12-17 14:57:02.089000+00:00
['Blockchain', 'Security', 'Blockchain Application', 'Blockchain Technology']
404
Distributed, Trustless Timestamps
How adding the blockchain creates advantages over traditional PKI techniques. For some engineers, it’s galling to see the blockchain being used as a notary to log non-financial transactions, particularly when Public Key Infrastructure (PKI) seems to have already solved the problems of making verified claims. While PKI alone could be used, we found that adding the blockchain provides a technical missing piece that results in better characteristics for both Issuers and Recipients of digital credentials. In 2016, when we began collaborating on the Blockcerts standard, the cornerstone of the project was to create a recipient-centric approach for issuing official records, consistent with the principles of self-sovereign identity. This resulted in digital credentials needing to have the following characteristics: Independence: The recipient owns the credential, and does not require the issuer or a 3rd party to be involved after receiving the credential. Ownership: The recipient may prove ownership of the credential. Control: The recipient has control over how they curate credentials they own. They may choose to associate credentials with an established profile they own, or not. Verifiability: The credential must be verifiable by 3rd parties, even within a trustless environment. Permanence: The credential must be a permanent record that can reliably last a lifetime. At the same time, we learned the following characteristics are needed for the credentialing system to be useful to an Issuer: Proof: The Issuer must be able to prove they issued the credential. Expiration: The Issuer must be able to set an expiration date for the credential. Revocation: The Issuer must be able to revoke credentials. Security: The system is secure and imposes minimal ongoing burden. Additionally, if the credential is going to have utility for 3rd parties, they must be convinced of the credential’s veracity: Integrity: The content of the credential hasn’t been altered. Authenticity: The Issuer is who the certificate claims, and the credential has not been forged. The Timestamp and Key Rotation A reliable timestamp is clearly important in the case of a credential that expires, but an independently stored timestamp is also critical for Issuers to rotate their issuing keys — a security requirement. Verifying a credential requires checking that it originated from a particular Issuer while that issuing key was valid. This requires knowledge of the timestamp beyond anything written into the credential itself. Why? If a private key is ever compromised, nothing prevents an attacker from issuing fake credentials and backdating the content. Even if an Issuer publicly revoked those fake credentials, an independent verifier would not know the difference between a valid and an invalid credential, unless there were some reliable source of when the transaction took place.
In traditional PKI techniques, this could be done through use of a time stamping authority (TSA), but that places a dependency on a trusted third party. On the other hand, the blockchain provides a permanent and trusted timestamp by design. To undermine this timestamp would require massive computational effort — rewriting the entire blockchain — to tamper with data before a certain point. So, the blockchain provides an independent timestamp for when each credential was conferred to a Recipient. This ultimately gives Issuers the ability to rotate their issuing keys without undermining the ability of 3rd parties to reliably verify transactions. Furthermore, the blockchain is a distributed ledger that does not depend on any trusted party like a Certificate Authority. The effect is improved availability, the capacity to independently verify, and redundancy that avoids single points of failure. Overall, the blockchain offers promising enhancements over traditional PKI techniques which help reach security goals while enabling individuals to hold their own official records, independent of any authority.
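To illustrate the key-rotation argument, here is a hedged sketch, with hypothetical names and deliberately simplified logic, of the check a verifier performs using the blockchain's timestamp; Blockcerts' real verification procedure is more involved.

from datetime import datetime, timezone

def key_valid_at(key_record: dict, tx_time: datetime) -> bool:
    """True if the issuing key was active when the anchoring tx was timestamped."""
    created = key_record["created"]
    revoked = key_record.get("revoked")  # None if never revoked/rotated
    return created <= tx_time and (revoked is None or tx_time < revoked)

issuer_key = {
    "created": datetime(2016, 1, 1, tzinfo=timezone.utc),
    "revoked": datetime(2017, 1, 1, tzinfo=timezone.utc),  # key rotated here
}
tx_time = datetime(2016, 6, 15, tzinfo=timezone.utc)  # read from the blockchain

# A backdated forgery signed after the compromise would anchor to a *later*
# block time and fail this check, even though its contents claim an old date.
print(key_valid_at(issuer_key, tx_time))  # True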
https://medium.com/learning-machine-blog/trusted-timestamps-bbeb3d29cc0
['Learning Machine Is Now Hyland Credentials']
2017-03-26 21:02:46.917000+00:00
['Technology', 'Cryptocurrency', 'Software Development', 'Blockchain', 'Bitcoin']
405
S3,E12 || Star Trek: Discovery (Series 3, Episode 12) Online 1080p-HD
Film, also called movie, motion picture or moving picture, is a visual art form used to simulate experiences that communicate ideas, stories, perceptions, feelings, beauty, or atmosphere through the use of moving images. These images are generally accompanied by sound and, more rarely, other sensory stimulations. The word “cinema”, short for cinematography, is often used to refer to filmmaking and the film industry, and to the art form that is the result of it. ❏ STREAMING MEDIA ❏ Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb to stream refers to the process of delivering or obtaining media in this manner. Streaming refers to the delivery method of the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies specifically to telecommunications networks, as most of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the Internet. For example, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content. And users lacking compatible hardware or software systems may be unable to stream certain content. Live streaming is the delivery of Internet content in real time, much as live television broadcasts content over the airwaves via a television signal. Live internet streaming requires a form of source media (e.g. a video camera, an audio interface, screen capture software), an encoder to digitize the content, a media publisher, and a content delivery network to distribute and deliver the content. Live streaming does not need to be recorded at the origination point, although it frequently is. Streaming is an alternative to file downloading, a process in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user can use their media player to start playing digital video or digital audio content before the entire file has been transmitted. The term “streaming media” can apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are all considered “streaming text”. ❏ COPYRIGHT CONTENT ❏ Copyright is a type of intellectual property that gives its owner the exclusive right to make copies of a creative work, usually for a limited time. The creative work may be in a literary, artistic, educational, or musical form.
Copyright is intended to protect the original expression of an idea in the form of a creative work, but not the idea itself. A copyright is subject to limitations based on public interest considerations, such as the fair use doctrine in the United States. Some jurisdictions require “fixing” copyrighted works in a tangible form. It is often shared among multiple authors, each of whom holds a set of rights to use or license the work, and who are commonly referred to as rights holders. These rights frequently include reproduction, control over derivative works, distribution, public performance, and moral rights such as attribution. Copyrights can be granted by public law and are in that case considered “territorial rights”. This means that copyrights granted by the law of a certain state do not extend beyond the territory of that specific jurisdiction. Copyrights of this type vary by country; many countries, and sometimes a large group of countries, have made agreements with other countries on procedures applicable when works “cross” national borders or national rights are inconsistent. Typically, the public law duration of a copyright expires 50 to 100 years after the creator dies, depending on the jurisdiction. Some countries require certain copyright formalities to establish copyright; others recognize copyright in any completed work, without a formal registration. It is widely believed that copyrights are a must to foster cultural diversity and creativity. However, Parc argues that contrary to prevailing beliefs, imitation and copying do not restrict cultural creativity or diversity but in fact support them further. This argument has been supported by many examples such as Millet and Van Gogh, Picasso, Manet, and Monet, etc. ❏ GOODS OR SERVICES ❏ Credit (from Latin credit, “(he/she/it) believes”) is the trust which allows one party to provide money or resources to another party wherein the second party does not reimburse the first party immediately (thereby generating a debt), but promises either to repay or return those resources (or other materials of equal value) at a later date. In other words, credit is a method of making reciprocity formal, legally enforceable, and extensible to a large group of unrelated people. The resources provided may be financial (e.g. granting a loan), or they may consist of goods or services (e.g. consumer credit). Credit encompasses any form of deferred payment. Credit is extended by a creditor, also known as a lender, to a debtor, also known as a borrower. ‘Star Trek: Discovery’ Challenges Asian Americans in Hollywood to Overcome ‘Impossible Duality’ Between China, U.S. CBS All Access’s live-action “Star Trek: Discovery” was supposed to be a huge win for under-represented groups in Hollywood. The $200 million-budgeted film is among the most expensive ever directed by a woman, and it features an all-Asian cast — a first for productions of such scale. Despite well-intentioned ambitions, however, the film has exposed the difficulties of representation in a world of complex geopolitics. CBS All Access primarily cast Asian rather than Asian American stars in lead roles to appeal to Chinese consumers, yet Chinese viewers rejected the movie as inauthentic and American.
Then, politics ensnared the production as stars Liu Yifei, who plays Star Trek: Discovery, and Donnie Yen professed support for Hong Kong police during the brutal crackdown on protesters in 2019. Later, CBS All Access issued “special thanks” in the credits to government bodies in China’s Xinjiang region that are directly involved in perpetrating major human rights abuses against the minority Uighur population. “Star Trek: Discovery” inadvertently reveals why it’s so difficult to create multicultural content with global appeal in 2020. It highlights the vast disconnect between Asian Americans in Hollywood and Chinese nationals in China, as well as the extent to which Hollywood fails to acknowledge the difference between their aesthetics, tastes and politics. It also underscores the limits of the American conversation on representation in a global world. In conversations with several Asian-American creatives, Variety found that many feel caught between fighting against underrepresentation in Hollywood and being accidentally complicit in China’s authoritarian politics, with no easy answers for how to deal with the moral questions “Star Trek: Discovery” poses. “When do we care about representation versus fundamental civil rights? This is not a simple question,” says Bing Chen, co-founder of Gold House, a collective that mobilizes the Asian American community to help diverse films, including “Star Trek: Discovery,” achieve opening weekend box office success via its #GoldOpen movement. “An impossible duality faces us. We absolutely acknowledge the terrible and unacceptable nature of what’s going on over there [in China] politically, but we also understand what’s at stake on the business side.” The film leaves the Asian American community at “the intersection of choosing between surface-level representation — faces that look like ours — versus values and other cultural nuances that don’t reflect ours,” says Lulu Wang, director of “The Farewell.” In a business in which past box office success determines what future projects are bankrolled, those with their eyes squarely on the prize of increasing opportunities for Asian Americans say they feel a responsibility to support “Star Trek: Discovery” no matter what. That support is often very personal amid the industry’s close-knit community of Asian Americans, where people don’t want to tear down the hard work of peers and friends. Others say they wouldn’t have given CBS All Access their $30 if they’d known about the controversial end credits. “‘Star Trek: Discovery’ is actually the first film where the Asian American community is really split,” says sociologist Nancy Wang Yuen, who examines racism in Hollywood. “For people who are more global and consume more global news, maybe they’re thinking, ‘We shouldn’t sell our soul in order to get affirmation from Hollywood.’ But we have this scarcity mentality. “I felt like I couldn’t completely lambast ‘Star Trek: Discovery’ because I personally felt solidarity with the Asian American actors,” Yuen continues. “I wanted to see them do well. But at what cost?” This scarcity mentality is particularly acute for Asian American actors, who find roles few and far between.
Lulu Wang notes that many “have built their career on a film like ‘Mulan’ and other crossovers, because they might not speak the native language — Japanese, Chinese, Korean or Hindi — to actually do a role overseas, but there’s no role being written for them in America.” Certainly, the actors in “Mulan,” who have seen major career breakthroughs tainted by the film’s political backlash, feel this acutely. “You have to understand the tough position that we are in here as the cast, and that Disney is in too,” says actor Chen Tang, who plays Mulan’s army buddy Yao. There’s not much he can do except keep trying to nail the roles he lands in hopes of paving the way for others. “The more I can do great work, the more likely there’s going to be somebody like me [for kids to look at and say], ‘Maybe someday that could be me.’” Part of the problem is that what’s happening in China feels very distant to Americans. “The Chinese-speaking market is impenetrable to people in the West; they don’t know what’s going on or what those people are saying,” says Daniel York Loh of British East Asians and South East Asians in Theatre and Screen (BEATS), a U.K. nonprofit seeking greater on-screen Asian representation. York Loh offers a provocative comparison to illustrate the West’s milquetoast reaction to “Mulan” principal Liu’s pro-police comments. “The equivalent would be, say, someone like Emma Roberts going, ‘Yeah, the cops in Portland should beat those protesters.’ That would be huge — there’d be no getting around that.” Some of the disconnect is understandable: With information overload at home, it’s hard to muster the energy to care about faraway problems. But part of it is a broader failure to grasp the real lack of overlap between issues that matter to the mainland’s majority Han Chinese versus minority Chinese Americans. They may look similar, but they have been shaped in diametrically different political and social contexts. “China’s nationalist pride is very different from the Asian American pride, which is one of overcoming racism and inequality. It’s hard for Chinese to relate to that,” Yuen says. Beijing-born Wang points out she often has more in common with first-generation Muslim Americans, Jamaican Americans or other immigrants than with Chinese nationals who’ve always lived in China and never left. If the “Mulan” debacle has taught us anything, in a world where we’re still too quick to equate “American” with “white,” it’s that “we definitely have to separate out the Asian American perspective from the Asian one,” says Wang. “We have to separate race, nationality and culture. We have to talk about these things separately. True representation is about capturing specificities.” She ran up against the industry’s inability to make these distinctions while creating “The Farewell.” Americans felt it was a Chinese film because of its subtitles, Chinese cast and location, while Chinese producers considered it an American film because it wasn’t fully Chinese. The endeavor to simply tell a personal family story became a “political fight to claim a space that doesn’t yet exist.” In the search for authentic storytelling, “the key is to lean into the in-betweenness,” she said.
“More and more, people won’t fit into these neat boxes, so in-betweenness is exactly what we need.” However, it may prove harder for Chinese Americans to carve out a space for their “in-betweenness” than for other minority groups, given China’s growing economic clout. Notes author and writer-producer Charles Yu, whose latest novel about Asian representation in Hollywood, “Interior Chinatown,” is a National Book Award finalist, “As Asian Americans continue on what I feel is a little bit of an island over here, the world is changing over in Asia; in some ways the center of gravity is shifting over there and away from here, economically and culturally.” With the Chinese film market set to surpass the U.S. as the world’s largest this year, the question thus arises: “Will the cumulative impact of Asian American audiences be such a small drop in the bucket compared to the China market that it’ll just be overwhelmed, in terms of what gets made or financed?” As with “Mulan,” more parochial, American conversations on race will inevitably run up against other global issues as U.S. studios continue to target China. Some say Asian American creators should be prepared to meet such challenges by broadening their outlook. “Most people in this industry think, ‘I’d love for there to be Hollywood-China co-productions if it meant a job for me. I believe in free speech, and censorship is terrible, but it’s not my battle. I just want to get my pilot sold,’” says actor-producer Brian Yang (“Hawaii Five-0,” “Linsanity”), who’s worked for more than a decade between the two countries. “But the world’s getting smaller. Streamers make shows for the world now. For anyone that works in this business, it would behoove them to study and understand issues that are happening in and [among] other countries.” Gold House’s Chen agrees. “We need to speak even more thoughtfully and try to understand how the world does not function as it does in our zip code,” he says. “We still have so much soft power coming from the U.S. What we say matters. This is not the problem and burden any of us as Asian Americans asked for, but this is on us, unfortunately. We just have to fight harder. And every step we take, we’re going to be right and we’re going to be wrong.”
https://medium.com/star-trek-discovery-s3xe12-4khd-quality/s3-e12-star-trek-discovery-series-3-episode-12-online-1080p-hd-2ba44081476d
['Naomi Briggs']
2020-12-25 22:12:55.026000+00:00
['Technology', 'Lifestyle', 'Coronavirus', 'TV Series']
406
DEFI What is decentralized finance and how does it work?
DEFI What is decentralized finance and how does it work? DeFi, or Decentralized Finance, is an alternative to traditional finance. Traditional financial services are all the products related to means of payment or money management offered by a bank, a financial institution, an insurance company or an investment company, among others. These traditional finances carry with them a series of obstacles, limitations and regulations, such as high commissions and control by banks and regulators, that make them barely accessible for some and unattractive for others. Thanks to blockchain and cryptocurrencies, DeFi has arrived to update, improve and globalize these outdated finances, placing them within everyone's reach: no matter where you are, if you have access to the Internet you can use them. The traditional financial system Today’s financial system is the center of most economies. All the functionalities of this system are controlled by centralized authorities such as governments, banks and other financial institutions. People deposit their money in banks and other financial institutions to save, and often make fixed and recurring deposits for profit. The first problem is that when someone deposits money in banks or other organizations, they have no control over their assets. You have little to no knowledge of where your money is invested and how these corporations manage it, so this system lacks transparency. These centralized institutions invest that accumulated money in the stock markets, in addition to granting loans at high interest rates and making big profits, but only a fraction of these profits is returned to depositors. One of the most unfair and under-addressed issues in the existing financial system is inequality in financial services. By some estimates, more than 1.7 billion people around the world do not have access to bank accounts or financial institutions of any kind. To address these drawbacks of financial systems, the world now has Decentralized Finance (DeFi). What is Decentralized Finance (DEFI)? DeFi is a new financial ecosystem: decentralized, global, transparent, resistant to censorship, without intermediaries and easily accessible, where each user has full control of their assets. The goal of decentralized finance (DeFi) is to provide a global and open alternative to all the financial services that people use today. All it requires is an internet connection and a smartphone. The tools used to carry out these decentralized finances are digital assets, decentralized applications (dApps), smart contracts, protocols and decentralized exchanges (DEX). The alternatives that this new financial ecosystem offers include decentralized loans, decentralized oracles and non-custodial token exchanges, among others. Ethereum has led decentralized finance from the beginning with Ethereum-based projects such as MakerDAO and Augur, but little by little other large platforms in the sector, such as EOS, Algorand and Tezos, are preparing to claim their share of the pie. What does DeFi offer? Global access to financial services DeFi applications provide global access to financial services, although local restrictions may apply. To access DeFi platforms such as Maker or Compound Finance, users only require an internet connection and a smartphone, regardless of their location or country.
Complete control over assets DeFi offers users complete control over their assets, as anyone can safely store, trade, and invest their assets on the blockchain, and no middleman handles these assets except smart contracts. Therefore, users have complete control over their assets. Privacy, security and transparency With DeFi, users have custody of their wealth and can securely transact without validation from a central party. As all activities are recorded on the blockchain, all transactions are publicly available. Therefore, this offers better transparency than a centralized system, where records can be easily modified. Payments and settlements There is a vast and expensive network of intermediaries involved in cross-border payments. These transactions take days to settle, and the intermediaries charge a hefty fee for their service. DeFi has the potential to completely eliminate these costly intermediaries, making remittance services much more affordable and efficient for the world’s population. DeFi promises much more than this: censorship-resistant transactions and high returns can also be achieved through DeFi. DeFi applications also provide a flexible user experience and interoperability with other products. DEFI loans Among the financial alternatives offered by DeFi, we can highlight decentralized loans, which have much higher returns than those offered by traditional market funds, even reaching 18%, compared to 2–3% in the traditional market. Thanks to decentralized finance, you can request a loan without the need for intermediaries. Despite the scalability problems in its network, if there is one project associated with DeFi from the beginning it is Ethereum; its products with the stablecoin DAI, for example, have been hailed as immensely interchangeable financial products. But, although Ethereum has been there from the beginning, it cannot afford to be complacent, as competitors are starting to follow suit. EOS EOS is a high-performance blockchain platform; one of its missions is to support decentralized applications that provide all the benefits that blockchain technology offers. “When we talk about DeFi, ETH failed. Ask Bitfinex, Bancor or any team that wants to implement a simple order book on ETH. The design of EOSIO is based on the initial decentralization lessons of the financial platform BitShares. Speed, low latency, neat indexing, floating point, and C++ are only available in EOSIO. EOSIO has RAM, network bandwidth resource leasing, Bancor, token name bidding, and various anchor token designs on the market. The decentralized future is in EOSIO.” Algorand Algorand seeks to provide the same services that Ethereum offers, only with better performance, putting an end to the common security, scalability, and decentralization issues that other blockchains have. “Our Proof-of-Stake protocol is the first of its kind to support scale, open participation, and transaction finality for billions of users. All backed by a sustainable business and a recognized team of experts,” they say on their website. As for DeFi, as announced by its founder Silvio Micali, Algorand is working on the generation of new fungible tokens and the execution of multi-party atomic transfers (AMPT). Tezos Like its competitors, Tezos works to be a tool for the development of smart contracts and decentralized applications (dApps). “The Tezos Foundation is pleased to announce that 14 new grants have been issued for projects submitted in response to its most recent Request for Proposals (RFP).
The Foundation considered proposals for the following categories as part of this RFP: applications created using Tezos smart contracts (with special interest in decentralized finance or ‘DeFi’ applications),” they say on the Tezos Foundation blog. Of these projects, the one focused on DeFi is Protofire, a development team focused on smart contracts. Conclusion DeFi has the potential to change the current financial landscape in a positive way. But it is still a small market and has a long way to go. With new projects and protocols entering the market, the world can look forward to a true decentralized financial system in the years to come.
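As a back-of-the-envelope illustration of the yield gap mentioned in the DeFi loans section above (the 18% and 2–3% figures are the article's own examples, not investment advice, and the compound function below is just an illustrative sketch):

```python
def compound(principal: float, annual_rate: float, years: int) -> float:
    """Value of a deposit compounded once per year at the given rate."""
    return principal * (1 + annual_rate) ** years

deposit, years = 1_000.0, 5
for label, rate in [("traditional 2%", 0.02),
                    ("traditional 3%", 0.03),
                    ("DeFi lending 18%", 0.18)]:
    print(f"{label}: {compound(deposit, rate, years):,.2f} after {years} years")
```

Over five years, the same 1,000-unit deposit grows to roughly 1,104 at 2% versus about 2,288 at 18%, which is the kind of spread driving interest in DeFi lending (before accounting for smart contract and market risks).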
https://medium.com/@blockchainx-tech/defi-what-is-decentralized-finance-and-how-does-it-work-79dc1fa44a38
[]
2020-11-09 15:09:51.963000+00:00
['Blockchain', 'Defi', 'Blockchain Technology', 'Blockchain Startup', 'Blockchain Development']
407
Smart chem cable industries limited
Largest manufacturer of co-axial, CAT5, and CAT6 cables, LED lights, electric insulation tape, and BOPP gum tape in Bangladesh. BUET tested; we guarantee quality products. Engineering Team Our qualified engineers are specially trained to manufacture a broad range of wires that satisfy the most stringent requirements. We manufacture cables and wires using both standard and special materials in our 10,000 m² production space and meet every possible expectation. Management System Our high-quality guarantee is backed by an integrated management system, which is implemented throughout our global network. Product Standard State-of-the-art testing equipment and testing processes ensure our high quality standard. SCCIL offers our customers high-quality products according to multiple international standards and certifications. Environmental and Energy SCCIL is an independent enterprise which develops, manufactures, and sells cables, wires, LED lights, tapes and accessories. We are committed to continuing to avoid and minimize negative environmental impact. Continuous Improvement We are striving for continuous improvement of the quality of products and services with regard to operational environmental protection, energy-related performance, and health and safety. Suppliers We rely on selected suppliers who work with us in a partnership and meet our standards for quality, environmental protection and energy efficiency. Central Office We’re here to help. Feel free to contact us at any time for quick technical support and the best customer service. +880–198–076–1695 +880–167–384–1724 [email protected] [email protected] House 3, Road 2, Sec 14, Uttara, Dhaka, Bangladesh 1230 Factory SCCIL1 Visit our factory inside Dhaka. +880–177–728–6511 [email protected] 229, Hazi Abdul Awal Road, West Hazaribag, Jhauchar, Dhaka, Bangladesh Factory SCCIL2 Visit our GREEN factory in Jhenaidah BSCIC. +880–198–076–1695 [email protected] Plot 42A, 43A, BSCIC Industrial Area, Jhenaidah, Bangladesh Factory SCCIL3 Visit our GREEN factory in Jhenaidah BSCIC. +880–151–521–4096 [email protected] Plot 47A, 48A, BSCIC Industrial Area, Jhenaidah, Bangladesh Sales Center-1 Visit our sales center in Dhaka. +880–198–076–1695 [email protected] 50 Captan Bazar Complex, Bhaban 1 (underground), Nababpur, Dhaka, Bangladesh Sales Center-2 Visit our sales center in Dhaka. +880–157–535–1698 [email protected] 229, Hazi Abdul Awal Road, West Hazaribag, Jhauchar, Dhaka, Bangladesh Sales Center-3 Visit our sales center in Uttara, Dhaka. +880–177–728–6511 [email protected] House 51, Road 16, Sector 14, Uttara, Dhaka, Bangladesh 1230 Sales Center-4 Visit our sales center in Khulna city. +880–167–384–1724 [email protected] 70, Hard Metal Gallery, Jashore Road, Dakbangla, Khulna, Bangladesh Sales Center-5 Visit our sales center in Jhenaidah BSCIC. +880–171–390–6331 [email protected] Plot 42A, 43A, BSCIC Industrial Area, Jhenaidah, Bangladesh Sales Center-6 Visit our sales center in Doulotpur, Khulna. +880–167–384–1724 [email protected] 810, 1st floor, Jessore Road, Daulatpur, Khulna, Bangladesh 9202 Our Corporate: Electronics & Communication Rafsan Group has an important role in the Electronics & Communication sector in Bangladesh. We are a cable manufacturer, retailer, supplier, and distributor of CAT5/5E, CAT6/6E, 100% copper coaxial cable, CSS, etc. READ MORE Apparel & Textile Rafsan Group has an important role in the apparel and textile sector in Bangladesh. We are an apparel manufacturer, supplier, and sourcing agency for raw materials, dyes, chemicals and other trimmings.
READ MORE Printing & Packaging Rafsan Group has an important role in the Printing & Packaging sector in Bangladesh. We are a packaging material manufacturer with a wide range of printing facilities. READ MORE Sourcing & Trading Rafsan Group has an important role in Sourcing & Trading in Bangladesh. We are a sourcing and trading agent and supplier of raw materials, dyes, chemicals, accessories and trimmings. READ MORE Logistics & Delivery Rafsan Group has an important role in Logistics & Delivery Services in Bangladesh. We provide transportation, global shipping, carriers, containers, vehicles, trucks, etc. READ MORE Charity Organization Saleha Jahid Charity Organization Our charity organization, located in Magura District, helps rootless children with education, food, shelter and proper care. READ MORE
https://medium.com/@info-2403/smart-chem-cable-industries-limited-5fd8bda8d88
['Smart Chem Cable Industries Limited']
2020-12-14 20:35:08.846000+00:00
['Cat6 Cable', 'Industry', 'Optical Fiber', 'Information Technology', 'Cat5e Cable']
408
How Unsplash Went From a Tumblr Page To Fully-Fledged Platform
How Unsplash Went From a Tumblr Page To Fully-Fledged Platform And how we can apply it to our projects. Photo by Rubén García on Unsplash You’ve heard of Unsplash, no doubt. The thumbnail of this very article is integrated straight into this post from the service. I searched within the text editor and picked a photo I liked. Boom, my article now has a header. It wasn’t always this way. I couldn’t always type away and select an extremely high-quality photo straight from my text editor to be used completely royalty-free. It took a lot of work in the right places, over a lot of time. But before putting in all that work to turn this into what it is today, it had to start somewhere. And starting is where most founders screw up. Starting has the most friction. And taking that first step to publish a piece of work might take a year, several developers, and a lot of anxiety. Unsplash, however, published its first version in 3 hours with $38. There was no umming and ahhing over the design. There was no unnecessary complexity. The first version was so basic that a teenager posting edgy content on the internet used the same technology — it was on Tumblr. The Original Unsplash on Tumblr | GIF adapted from Source This is the story of how Unsplash started and how you can use the same principles to launch practically any idea that comes to mind.
https://sahkilic.medium.com/how-unsplash-went-from-a-tumblr-page-to-fully-fledged-platform-a65e13169e27
['Sah Kilic']
2020-11-01 08:58:10.791000+00:00
['Entrepreneurship', 'Business', 'Startup', 'Technology', 'Advice']
409
How are transactions recorded in a Blockchain?
In my last article, I explained what Blockchain is. Taking the example of the Bitcoin network, let us understand how transactions are recorded. Before that, let us understand the term hash/hashing that we used in the previous article. What hashing does is take your transaction as an input of any length and create an output of fixed size (64 digits and letters). Use the website to understand it better. (Click on the text) The Bitcoin network uses the SHA-256 algorithm. Let us see how transactions are recorded in the network. 1. We need a crypto wallet to make a transaction; after opening an account, load money into your wallet through your bank account and buy bitcoin. Now your wallet will show the bitcoins that you are holding. 2. To make a transaction, i.e., to send someone BTC, we need the recipient’s public key, the sender’s private key, and the BTC amount. Think of the public key as the recipient’s home address when you are sending a parcel, and the private key as the password you need when making an online payment. 3. Next, the system broadcasts the transaction to the network, and validators validate it on two parameters: a) the sender has the required number of BTC in his wallet; b) the private key is valid. Transactions are added to the memory pool once they are verified. 4. From the memory pool, miners pick up transactions to add to a block (the higher the transaction fees, the higher the chances the transaction is added to a block faster). 5. A miner proposes the next block, and other nodes verify it (blocks are generated using the proof-of-work consensus mechanism, which I will discuss in the next article). 6. The new block is added to the existing blockchain. 7. The transaction is complete.
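To see the fixed-size property described above in action, here is a minimal Python sketch using the standard hashlib library (illustrative only: real Bitcoin transactions are serialized into a specific binary format, and Bitcoin applies SHA-256 twice):

```python
import hashlib

# Inputs of any length produce a digest of exactly 64 hex characters (256 bits).
samples = ["Alice pays Bob 0.5 BTC", "Alice pays Bob 0.6 BTC", "x" * 10_000]
for tx in samples:
    digest = hashlib.sha256(tx.encode("utf-8")).hexdigest()
    print(len(digest), digest)
```

Notice that changing a single character of the input produces a completely different digest; this is what makes any tampering with a recorded transaction immediately detectable.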
https://medium.com/@shreyanshvyas35/how-are-transactions-recorded-in-a-blockchain-ba4100dd2a6b
[]
2021-06-17 09:56:05.571000+00:00
['Blockchain Technology', 'Bitcoin', 'Bitcoin Wallet', 'Cryptocurrency', 'Blockchain']
410
Deleting Your Data From the Internet
The Background From time to time, we’re told that we should Google ourselves. This is even more true before a job interview or any other important meeting. Why? Simple: it’s best to know what’s publicly available about us on the Internet. If there’s any information that’s embarrassing or damning out there, it’s best to know about it before others do, so we have the chance to edit or delete it. Even more important: we should ensure that our contact information isn’t easily obtained online. I don’t want anyone in the world to have easy access to my current home address and phone number, my previous addresses and phone numbers, my auto & real estate loans, or members of my immediate and extended family. Unfortunately, all of that information is free, publicly available, and easily accessible to anyone who searches for it. “Data broker” websites provide these kinds of data to the public. In fact, here’s a link to one example which includes scores of individuals who happen to share my name: https://www.spokeo.com/David-Koff Spokeo.com is easy to search and provides a TON of personal information about every David Koff listed, including age, gender, relatives, and — get this — a map to a recent address. The cost to access this amount of data? Free. #TotallyCreepy For an additional fee — of just $0.95!! — phone numbers, email addresses, marital status, and court records can be provided. If you think I’m joking, just check out this image: By the way, if you’re reading this and you just happen to be David Brian Koff of Denver, Colorado: I offer my most sincere apologies. All David Koffs should stick together, so keep reading: I’ll explain how to delete your data! :) The Latest Spokeo, of course, isn’t the only data broker. Not by a long shot. Just have a look at this list. Or this one. Or this one. There are hundreds of data brokers that are publicly available, easy-to-find, and super easy-to-use. They specialize in collecting data about you and me which they can then either give away or sell. I don’t know about you, but I don’t want my personal data to be so publicly available for so many people to find. Or buy. Or use in ways that I can’t control. That’s why I deleted my information from Spokeo. And that’s why I then took the extra step of scrubbing my data from every other data broker I could find. How I did that is a short, two-chapter story: Chapter One: I invested about 15–20 hours of my time over a week and went to every data broker website on this list and followed the instructions for how to scrub my info. 15–20 hours is a lot of time for me, especially with a child at home. Chapter Two: I managed the email confirmations from each of the various web brokers and went back to each site a second time to confirm that my data was, indeed, wiped. That’s it. The peace-of-mind I have now is worth far more than the time I invested. Now, when I Google myself or visit any of the largest data brokers on the planet, my personal data isn’t shown. Now look: I’m no dope. I’m sure that SOME of my data is still available — somewhere out there on the Internet, available to law enforcement agencies should they need it or to the most sophisticated hackers — but, by and large, I’ve made finding out info about me far more difficult. And that was my goal. Even better, as a result of my time investment, I now also get little to no junk mail through the US Mail. That includes no catalogs, brochures, credit card solicitations, and more. 
So, yeh, I’d have to say that my privacy — and the amount of paper that gets wasted on me — is now #MuchBetter The Method As I mentioned above, scrubbing your data from the Internet only requires an investment of your time to do some manual work: going to websites, filling out forms, and then waiting for responses. Some websites can scrub your data within 24 hours. Others take a few days to a week. All of the legit businesses should notify you if you provide a valid email address. Unfortunately, all of that information is free, publicly available, and easily accessible to anyone who searches for it. If you don’t have the time (or desire) to do this yourself: stay tuned. I’ll share another solution for you folks in the section below called “The Alternative Solution”. To Begin: Start with this list. The author has clear instructions with (mostly) accurate links, and back-up phone numbers in case a call is necessary to the data broker. Be sure to opt-out at the Direct Marketing Association because that stops your data from being shared with others who wish to market and mail things to you. That means reducing junk mail substantially. When you’re done with that list, I’d tackle this list next. Good To Know: Most data brokers allow you to scrub your data from their website for free; a few — cuz they suck — charge a fee. Most data brokers have an automated system on their websites that allow you to make a simple removal request; a few — cuz they suck — ask you to provide record IDs, links, or other specifics from their websites to honor your request. Most data brokers don’t ask you to prove who you are when you request that they scrub your data; a few — cuz they suck — will ask for tangible proof, like a driver’s license. If You Need It: Some data brokers may ask you to provide a short, written request to authorize your opt-out. Here is the stock letter which I use. Feel free to copy it if you like: Thank you in advance for removing all of my personal information from this and any other subsidiary website of yours. I do not authorize any of my personal data to appear on any website of yours for any reason without my express written consent in advance. This includes, but is not limited to: my names, mailing addresses, phone numbers, email addresses, cities, states, countries, possible or actual family members or contact information of any kind. Thank you in advance for your help! Here is the data which I’d like removed: Then, just list any pertinent information, links, or record IDs that you’d like purged from the broker's website(s). Total Time Commitment: I’d budget 15–20 hours of time to do this work. If you don’t have the time (or desire) to do this yourself: scroll down to the section below called “The Alternative Solution”. The Confirmation Here’s a typical confirmation email which, in this case, I received from a data broker called PeopleByName. As you can see, in PeopleByName’s case, I had to submit any “RecordID” numbers associated with my name and information on their website: As my longtime readers know — and as you can see from the screengrab above — I never EVER provide my personal email address to anyone. That rule goes doubly when contacting a data broker. Instead, I use two email services: 33mail (which I’ve known about for years) and Erine.email (which I discovered thanks to one of my newsletter subscribers!). 
Using these services accomplishes three goals: They provide me with an infinite number of fully customizable emails All emails get auto-forwarded to my personal email so I don’t have to go to another website to retrieve them. I’m able to block any of my custom email alias at the press of a button. In fact, just check out the image above: at the top of the email header in a green box, you’ll note how 33mail mail provides a link — in every email!!! — to block any further correspondence from any of the aliases I create. Nice! The Alternative Solution While all of us would probably like to see our data scrubbed from the largest data brokers on the Internet, not all of us have the time to do the work involved. If that’s the case, the alternative solution is to pay a reasonable fee to let others do the work for you. After I learned how to successfully remove my personal data from data brokers, I began offering the same service to family and friends. I offer my services to the general public for a fair price of $350, but I’ll tell you upfront that you can absolutely find companies to do this work for cheaper. In fact, here’s a link to a popular service called DeleteMe that charges between $129 to $350/year depending on the level of service you purchase. I give you this information up front because — if you’re on a budget — DeleteMe is a reasonable and smart option. I charge more, but, I do everything by hand, the old-fashioned way. And I also handle your physical junk mail services, something DeleteMe doesn’t do. To learn more about the services that I offer, click here. The Takeaway You’ve got options if you want to scrub a ton of your personal data from the web. If you invest 15–20 hours of your time, you do the work yourself using the guides here, here, and here. Alternatively, you can pay a reasonable fee for others to do the work for you. Regardless of how you choose, just remember these four things… If you’re concerned about privacy and/or security, you should do whatever you can to reduce your personal data being available online. If you pay for others to remove your data for you, that’s fine: just be sure you can trust the individual or company to whom you’ll provide some of your personal data. There’s no known way to remove ALL of your personal data from the web. No doubt there are many local, state, and federal agencies that have data on every one of us. No doubt those data are available via the web to the most malicious hackers. The goal is never to remove ALL of your personal data from the web: it’s to remove as much of it as possible. And that goal, dear friends, is 100% doable. Go for it. And that’s a wrap for today’s installment, everyone. Thank you again, for reading and for being a subscriber. Let me know your thoughts & questions in the comments section. As always… surf safe.
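If you take the do-it-yourself route described above, even a tiny script can keep the second-visit confirmations from slipping through the cracks. Here is a minimal sketch, with made-up broker names and dates, that flags any opt-out request still unconfirmed after a week:

```python
from datetime import date, timedelta

# Hypothetical log of opt-out requests: (broker, date requested, confirmed?)
requests = [
    ("ExampleBrokerOne", date(2020, 6, 20), False),
    ("ExampleBrokerTwo", date(2020, 6, 21), True),
]

today = date.today()
for broker, requested, confirmed in requests:
    if not confirmed and today - requested > timedelta(days=7):
        print(f"Follow up with {broker} (requested {requested})")
```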
https://medium.com/swlh/deleting-your-data-from-the-internet-6d2cf0ac9e9b
['David Koff']
2020-07-05 22:33:11.695000+00:00
['Technology', 'Tech', 'Data', 'Security', 'Privacy']
411
Addressing Financial Industry’s Biggest Problems with Blockchain
Financial trading today is rife with complexities and inefficiencies, many of which surfaced over time as new intermediaries were added to accommodate the demands of a globalized economy. One of the lengthiest processes involved in securities trading is clearing & settlement. Clearing is the procedure by which the differences between orders are reconciled and a match is made between buyer and seller. Settlement refers to the date when the transaction is completed between buyer and seller. In traditional trading systems, clearing & settlement is executed by a host of intermediaries and involves manual data tracking and verification, all of which adds complications and slows down the entire process. These issues are exacerbated in asset trading, with average settlement taking 6–8 weeks to complete. Blockchain technology is well-suited to address these issues because of its capability for enabling secure and transparent data tracking, as well as the decentralized, network-based verification of transactions. Why are Existing Systems So Cumbersome? In the early days of the stock exchange, trades would actually occur directly between people on the floor; if a buyer and seller agreed on the price of an order, they completed the transaction right then and there. This is an example of what we now call a peer-to-peer exchange, or an exchange that takes place between two entities without involvement from a third party intermediary. As financial markets grew and the economy became more globalized, trading in a peer-to-peer manner became nearly impossible; as more assets and participating entities joined the global financial system, more third parties were required to facilitate clearing & settlement as well as record and verify transactions. It’s not profound to suggest that the more layers that are added to a system, the more opportunities there will be for inefficiencies and error. It’s estimated that these errors cost the securities industry $80B annually. A “security” represents “partial ownership” of an asset without actually possessing the asset. There are various types of securities, including equities, bonds, and cash. Alternative assets are those which exist outside of these three common types and include hedge funds, mutual funds, artwork, real estate, and cryptocurrency assets, to name a few. Investment in alternative assets is no less lucrative than in more traditional assets, but alternative assets are known to be less liquid (liquidity refers to how easy it is to buy and sell on the market at fair prices). Despite the issue of liquidity, the alternative asset industry totaled $7.7 trillion as of 2017 and interest continues to grow. Why Are Asset Trading Systems Even More Inefficient? The trading protocols and systems for alternative assets are even less well defined than those for traditional securities, thereby contributing to greater inefficiencies. The lack of standardization across assets often means that investors cannot manage their portfolio in one location and instead must navigate multiple platforms and service providers to execute and track trades. Blockchain, with its secure data tracking capabilities, makes for an obvious choice in terms of a technology to be used in improving these systems. Governor Lael Brainard recently spoke about the potential for blockchain technology to address the greatest inefficiencies in the financial industry in a speech sponsored by the Federal Reserve Bank of San Francisco.
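To make the clearing step concrete, here is a deliberately simplified Python sketch of what reconciling a buy order with a sell order can look like, producing the two delivery obligations that settlement must then carry out. This is a toy model under my own assumptions; real clearing adds netting, margining, and many safeguards, and it does not describe any particular platform's implementation:

```python
from dataclasses import dataclass

@dataclass
class Order:
    party: str
    side: str     # "buy" or "sell"
    asset: str
    qty: int
    price: float

def clear(buy: Order, sell: Order):
    """Match a buy against a sell; return the settlement obligations."""
    assert buy.side == "buy" and sell.side == "sell"
    assert buy.asset == sell.asset and buy.price >= sell.price
    qty = min(buy.qty, sell.qty)
    cash = qty * sell.price  # naive rule: execute at the seller's price
    return [(sell.party, buy.party, f"{qty} {buy.asset}"),   # deliver the asset
            (buy.party, sell.party, f"{cash:.2f} USD")]      # deliver the cash

print(clear(Order("alice", "buy", "TOKEN", 10, 5.00),
            Order("bob", "sell", "TOKEN", 10, 4.90)))
```

Every manual reconciliation step like this that an intermediary performs today is a candidate for the decentralized, network-based verification discussed next.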
A Blockchain-based Solution Here at OpenFinance Network, we are harnessing blockchain technology to enable increased operational efficiency, especially in clearing & settlement, and also improve transparency, reporting, and security for alternative asset trading. Having been a trusted player in the alternative asset industry since 2014, we are well-poised to lead development in solutions to address the inefficiencies that plague asset trading. By replacing the need for centralized clearinghouses and manual record keeping, trades conducted on the OFN marketplace are verified in real-time by a decentralized network. Our platform, launched in July, introduces an “open source” version of our internal clearing & settlement process applied to both traditional and digital alternative assets. The decentralized securities depository enables more efficient and secure transfer of data between intermediaries. Blockchain-based tracking of transactions allows for greater transparency and easier verification. We’re able to ensure compliance with regulatory guidelines by storing sensitive private data on a secure sidechain and utilizing a zero-knowledge proving system to provide an auditable record to the public. Learn more about our underlying technology in our white paper: https://www.openfinance.io/public/whitepaper.pdf Governor Lael Brainard said in her speech that she “[remains] optimistic that the financial sector will find valuable ways to employ distributed ledger technology in the area of payments, clearing, and settlement in coming years.” We’re not waiting for the years to come to develop solutions — you can go to our website to join our network today: https://www.openfinance.io/trade.html As always, be sure to keep up with our developments on social media: Website: openfinance.io Telegram: t.me/openfinancenetwork Twitter: twitter.com/OpenFinanceIO Join the OpenFinance Network today: www.openfinance.io/ ### Juan M. Hernandez is the Founder and CEO of OpenFinance Network, the trading platform for security tokens and other alternative assets. Juan is a serial entrepreneur, technologist, and polymath experienced in financial markets, exchanges, and blockchain technology. He holds a CS degree from Northwestern University and an MBA from the Kellogg Graduate School of Management. If you enjoyed this post, please “clap” 50X in the bottom left corner so it will be shared with more people.
https://medium.com/openfinance/addressing-financial-industrys-biggest-problems-with-blockchain-39b5a99e5208
['Juan M. Hernandez']
2018-08-29 14:58:56.606000+00:00
['Ethereum', 'Cryptocurrency', 'Tokenization', 'Blockchain Technology', 'Blockchain']
412
Artificial Intelligence Research: The Octopus Algorithm for Generating Goal-Directed Heuristic Feedback
Originally published on July 1, 2018 at Blogger; Revised on January 24, 2019. During the 2010 FIFA World Cup eight years ago, a common octopus named Paul the Octopus drew worldwide attention because it “accurately predicted” all the results of the most important soccer matches in the world (sadly, it died of natural causes shortly after that). Perhaps Paul the Octopus just got extraordinarily lucky. Eight years later, as reported by the MIT Technology Review, artificial intelligence has been used in its stead to predict the World Cup (though I doubt it will achieve the 100% success rate that the famous octopus so marvelously did). While my research on artificial intelligence (AI) has nothing to do with predicting which team would win the World Cup, octopuses have become one of my inspirations in the past few days. My work is about developing AI techniques that support learning and teaching through solving vastly open-ended problems such as scientific inquiry and engineering design. One of the greatest challenges in such problem-solving tasks is how to automatically assess student work so that we can automatically generate instructional feedback. Typically, the purpose of this kind of feedback, called formative feedback, is to gradually direct students to some kind of goal, for example, to achieve the most energy-efficient design of a building that meets all the specs. Formative feedback is critically important to ensuring the success of project-based learning, a common pedagogy for teaching and practicing scientific inquiry and engineering design. Based on my own experience, however, many students have great difficulties making progress towards the goal in the short amount of time typically available in the classroom. Given the time constraints, they need help on an ongoing basis. But it is unrealistic to expect the teacher to simultaneously monitor a few dozen students while they are working on their own projects and provide timely feedback to each and every one of them at the same time. This is where AI can help. This is the reason why we are developing new pedagogical principles and instructional strategies, hoping to harness the power of AI to spur students to think more deeply, explore more widely, and even design more creatively. Although this general idea of using AI in education makes sense, developing reliable algorithms that can automatically guide students to solve massively open-ended problems such as engineering design is by no means a small job. Through three months of intense work in this field, I have developed genetic algorithms that can be used to find optimal solutions in complex design environments such as the Energy3D CAD software, which you can find in earlier articles published through my blog. These algorithms were proven to be effective for optimizing certain engineering problems, but to call them AI, we will need to identify what kind of human instructional intelligence they are able to augment or replace. From my current point of view, an apparent class of AI applications is about mimicking certain teaching capacities of peers and instructors. In order to create an artificial peer or even an artificial instructor, we would have to figure out algorithms that simulate the interactions between a student and a peer or between a student and an instructor, in particular those related to heuristics — one of three keys to the mind according to German psychologist Gerd Gigerenzer.
In teaching, heuristics generally represent scaffolding methods for inspiring and guiding students to discover knowledge step by step on their own. In computer science, genetic algorithms are also called metaheuristic algorithms, variations of heuristic algorithms. Heuristics generally represent computational methods for searching and optimizing solutions step by step based on updated data. The similarity between heuristics in teaching and heuristics in computing is that the answer is not given right away— only a suggestion for understanding or solving a problem based on currently available information is provided. The difference is that, in the case of teaching, we withhold the answer because we want students to think hard to discover it by themselves, whereas in the case of computing, we don’t really know what the optimal answer may be. Fig. 1: An illustration of the Octopus Algorithm Despite the commonality, there are many things to consider when we try to use computational heuristics to build educational heuristics. In teaching, an optimization algorithm that yields the best solution in a long single run is not very useful as it doesn’t provide sufficient opportunities for engaging students. You can imagine that type of algorithm as someone who does something very fast but doesn’t pause to explain to the learner how he or she does the job. To create a developmentally appropriate tool, we will need to slow down the process a bit — sort of like the creeping of an octopus — so that the learner can have a chance to observe, reflect, internalize, and catch up on their own when AI is solving the problem step by step (“give some help, but not too much” — as an experienced instructor would act in heuristic teaching). This kind of algorithm is known as local search, a technique for finding an optimal solution in the vicinity of a starting point that represents the learner’s current state (as opposed to global search that casts a wide net across the entire solution space, representing equally all possibilities regardless of the learner’s current state). Random optimization is one of the local search methods proposed in 1965, which stochastically generates a set of candidate solutions distributed around the initial solution in accordance with the normal distribution. The graphical representation of a normal distribution is a bell curve that somewhat resembles the shape of an octopus (Figure 1). When using a genetic algorithm to implement the local search, the two red edge areas in Figure 1 can be imagined as the “tentacles” for the “octopus” to sense “food” (optima), while the green bulk area in the middle can be imagined as the “body” for it to “digest the catches” (i.e., to concentrate on local search). Once an optimum is “felt” (i.e., one or more solution points close to the optimum is included in the randomly generated population of the genetic algorithm), the “octopus” will move towards it (i.e., the best solution from the population will converge to the optimum) as driven by the genetic algorithm. The length of the “tentacles,” characterized by the standard deviation of the normal distribution, dictates the pace in which the algorithm will find an optimum. The smaller the standard deviation, the slower the algorithm will locate an optimum. Fig. 
2: Learning through a human-AI partnership I call this particular combination of random optimization and genetic algorithm the Octopus Algorithm as it intuitively mimics how an octopus hunts on the sea floor (and, in part, to honor Paul the Octopus and to celebrate the 2018 World Cup Tournament). With a controlled drift speed, the Octopus Algorithm can be applied to incrementally correct the learner’s work in a way that goes back and forth between the human and the machine, making it possible for us to devise a learning strategy based on human-machine collaboration as illustrated in Figure 2. Another way to look at this human-machine relationship is that it can be used to turn a design process into some kind of gaming (e.g., chess or Go), which challenges students to compete against a computer towards an agreed goal but with an unpredictable outcome (either the computer wins or the human wins). It is our hope that AI would ultimately serve as a tool to train students to design effectively just like what it has already done for training chess or Go players. Fig. 3: Finding an optimal tilt angle for a row of solar panels How does the Octopus Algorithm work, I hear you are curious? I have tested it with some simple test functions such as certain sinusoidal functions (e.g., |sin(nx)|) and found that it worked for those test cases. But since I have the Energy3D platform, I can readily test my algorithms with real-world problems instead of some toy problems. As the first real-world example, let’s check how it finds the optimal tilt angle of a single row of solar panels for a selected day at a given location (we can do it for the entire year, but it takes much longer to run the simulation with not much more to add in terms of testing the algorithm), as shown in Figure 3. Let’s assume that the initial guess for the tilt angle is zero degree (if you have no idea which way and how much the solar panels should be tilted, you may just lay them flat as a reasonable starting point). Figure 4 shows the results of four consecutive runs. The graphs in the left column show the normal distributions around the initial guess and the best emerged after each round (which was used as the initial guess for the next round). The graphs in the right column show the final distribution of the population at the end of each round. The first and second runs show that the “octopus” gradually drifted left. At the end of the third run, it had converged to the final solution. It just stayed there at the end of the fourth run. Fig. 4: Evolution of population in the Octopus Algorithm When there are multiple optima in the solution space (a problem known as multimodal optimization), it may be appropriate to expect that AI would guide students to the nearest optimum. This may also be a recommendation by learning theories such as the Zone of Proximal Development introduced by Russian psychologist Lev Vygotsky. If a student is working in a certain area of the design space, guiding him or her to find the best option within that niche seems to be the most logical instructional strategy. With a conventional genetic algorithm that performs global search with uniform initial selection across the solution space, there is simply no guarantee that the suggested solution would take the student’s current solution into consideration, even though his/her current solution can be included as part of the first generation (which, by the way, may be quickly discarded if the solution turns out to be a bad one). 
The Octopus Algorithm, on the other hand, respects the student’s current state and tries to walk him/her through the process stepwisely. In theory, it is a better technique to support personalized learning, the number one in the 14 grand challenges for engineering in the 21st century posed by the National Academy of Engineering of the United States. Fig. 5: Finding an orientation of a house that results in best energy saving. Let’s see how the Octopus Algorithm finds multiple optima. Again, I have tested the algorithm with simple sinusoidal functions and found that it worked in those test cases. But I want to use a real-world example from Energy3D to illustrate my points. This example is concerned with determining the optimal orientation of a house, given that everything else has been fixed (Figure 5). The orientation will affect the energy use of the house because it will receive different amounts of solar radiation through the windows at different orientations. Fig. 6: Using four “octopuses” to locate four optimal orientations for the energy efficiency of a house. By manual search, I found that there are basically four different orientations that could result in comparable energy efficiency, as depicted in Figure 6. Fig. 7: Locating the nearest optimum Now let’s pick four different initial guesses and see which optimum each “octopus” finds. Figure 7 shows the results. The graphs in the left column show the normal distributions around the four initial guesses. The graphs in the right column show the final solutions to which the Octopus Algorithm converged. In this test case, the algorithm succeeded in ensuring nearest guidance within the zone of proximal development. Why is this important? Imagine if the student is experimenting with a southwest orientation but hasn’t quite figured out the optimal angle. An algorithm that suggests that he or she should abandon the current line of thinking and consider another orientation (say, southeast) could misguide the student and is therefore unacceptable. Once the student arrives at an optimal solution nearby, it may be desirable to prompt him/her to explore alternative solutions by choosing a different area to focus and repeat this process as needed. The ability for the algorithm to detect the three other optimal solutions simultaneously, known as multi-niche optimization, would be helpful but may not be essential in this case. Fig. 8: “A fat octopus” vs. “a slim octopus.” There is a practical problem, though. When we generate the normal distribution of solution points around the initial guess, we have to specify the standard deviation that represents the reach of the “tentacles” (Figure 8). As illustrated by Figure 9, the larger the standard deviation (“a fatter octopus”), the more likely the algorithm will find more than one optima and may lose the nearest one as a result. In most cases, finding a solution that is close enough may be good enough in terms of guidance. But if this weakness becomes an issue, we can always reduce the standard deviation to search the neighborhood more carefully. The downside is that it will slow down the optimization process, though. Fig. 9. A “fatter octopus” may be problematic. In summary, the Octopus Algorithm that I have invented seems to be able to accurately guide a designer to the nearest optimal solution in an engineering design process. Unlike Paul the Octopus that relied on supernatural forces (or did it?), the Octopus Algorithm is an AI technique that we create, control, and leverage. 
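For readers who want to experiment, here is a minimal Python sketch of the core loop as I have described it: sample a population from a normal distribution centered on the current solution, keep the best candidate, and repeat. The standard deviation sigma plays the role of the tentacle reach; for brevity, this toy version uses plain elitist selection in place of a full genetic algorithm:

```python
import math
import random

def octopus_search(f, x0, sigma=0.3, pop_size=30, rounds=20):
    """Drift from x0 toward a nearby maximum of f by local sampling."""
    best = x0
    for _ in range(rounds):
        population = [random.gauss(best, sigma) for _ in range(pop_size)]
        population.append(best)  # the learner's current solution survives
        best = max(population, key=f)
    return best

# A multimodal test function in the spirit of the |sin(nx)| tests above.
f = lambda x: abs(math.sin(3 * x))
print(octopus_search(f, x0=0.1))  # settles near the optimum closest to 0.1
```

Shrinking sigma keeps the search closer to the learner's current design at the cost of slower convergence, exactly the trade-off between the "slim" and "fat" octopus discussed above.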
On a separate note, since some genetic algorithms also employ tournament selection like the World Cup, perhaps Paul the Octopus was thinking like a genetic algorithm (joke)? For the computer scientists who happen to be reading this article, it may also add a new method for multi-niche optimization besides fitness sharing and probabilistic crowding.
https://charlesxie.medium.com/artificial-intelligence-research-the-octopus-algorithm-for-generating-goal-directed-feedback-116b87cff2a5
['Charles Xie']
2019-01-24 22:14:24.022000+00:00
['Genetic Algorithm', 'Artficial Intelligence', 'Engineering', 'Design', 'Educational Technology']
413
Implementing Payments in Your Web or Mobile App as a Startup in India
Choose wisely The most fundamental factor in the life-cycle of a start-up: Money. 💰 Not just in the general scheme of fund-raising or making profits, but especially if your company relies on online payments for your product to sell, for example, an event ticketing portal or selling pickles online. Having a smooth payment interface is a basic necessity in such cases. Think of it: you’ve spent all your money developing a killer product that your potential customer gets really excited about buying, only to turn them off at the last step: paying for it. 😿 So let’s dig into how you can effectively implement digital payments. Here’s what you should be thinking about. Universal Instrument Support Broadly ensure that your payment gateway supports all common existing methods such as UPI, Net Banking, Debit Cards, Credit Cards (including International) as well as a majority of Indian wallets, namely Paytm, PhonePe, Ola Money, MobiKwik and Freecharge. Well, maybe not all. Two important things to especially ensure: Your gateway supports UPI, and has direct connectivity to Google Pay. This is because UPI has a large share of online payments in India due to its easy usability, and these two are the kings of the UPI market in India. 👑 Your gateway supports wallets (especially Paytm) since they have become an important part of the Indian payment ecosystem. In case your target audience is from Tier 2–3–4 cities and beyond, look for direct Paytm support, since these audiences love using Paytm. Costs and Transaction Fees Now that you’ve narrowed down your choices on the basis of instrument support, you must filter with respect to your budget. Categorising by fee plans, there are basically two types of gateways: Those that charge a set-up fee + a transaction fee. Those that charge only a transaction fee. Little costs add up at scale! Start-ups should prefer the latter option as it’s pointless to invest in a high one-time set-up/maintenance fee given the uncertainties of usage and scale. Most gateways charge you a percentage of the transaction (1%–4%, depending on the instrument) and / or a fixed fee per transaction. I’ve collected links to the pricing of trusted payment gateways in India. Features That Matter! You should look out for features such as payment links, subscription / recurring payments, easy payouts & refunds, and settlements for multiple vendors. The key feature that you as a startup need to have in place is Good Customer Support. Said no one ever To avoid major losses in terms of money as well as reputation, you need excellent customer support to prevent sudden issues from turning into disputes. Users hate pending payments and often dispute such transactions if there is no immediate resolution. 😰 This is what happens if they do that: When your customer disputes a transaction, the customer’s bank allows them to file a complaint and demand a refund from you (the merchant). Your bank will debit the disputed funds from your account and place these funds on hold, pending the outcome of the case. Your customer’s bank will then initiate an investigation into the transaction, and if the dispute is found to be legitimate, the held funds will be credited to your customer’s account. If the customer’s claims are discovered to be without merit, the held funds will be reversed to the merchant’s account. The process of filing a dispute is called a chargeback.
You can imagine how damaging chargebacks can be, not just in terms of time and money but also your startup's reputation; hence the emphasis on good customer support.

Update: Do check the gateway's merchant bank and make sure the gateway is backed by a trusted banking partner, to avoid any loss of users and business. Recently, a major payment app in India (I don't want to name it) was down for 24 hours because the Reserve Bank of India had placed its merchant bank under a moratorium.

I hope you've gained some clarity from this article and are now better equipped to choose your ideal gateway. I will be covering new developments in the Indian payments space (LazyPay!) in upcoming posts, stay tuned! 😄 Gunish Matta [LinkedIn Facebook Github] Thanks to Mahima Samant, Sumukha Nadig, Mashayikh Shaikh for edits and reviews of the draft
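As a quick illustration of how little costs add up at scale, here is a minimal sketch comparing the two fee plans described above. All numbers are hypothetical, not any real gateway's pricing.

    # Plan A: one-time set-up fee + lower transaction rate.
    # Plan B: no set-up fee, higher transaction rate.
    SETUP_FEE = 20_000      # INR, hypothetical, plan A only
    RATE_A = 0.018          # 1.8% per transaction (plan A)
    RATE_B = 0.024          # 2.4% per transaction (plan B)
    AVG_ORDER = 500         # average order value in INR

    for n in (100, 1_000, 100_000):
        volume = n * AVG_ORDER
        cost_a = SETUP_FEE + RATE_A * volume
        cost_b = RATE_B * volume
        print(f"{n:>7} txns: plan A = {cost_a:>12,.0f}  plan B = {cost_b:>12,.0f}")

At low volumes the no-set-up-fee plan is far cheaper, which is exactly why a start-up facing uncertain usage should prefer it; the set-up fee only pays for itself at a scale you cannot yet count on.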
https://medium.com/novasemita/implementing-payments-in-your-web-or-mobile-app-as-a-startup-69d143ec0ef0
['Gunish Matta']
2020-03-27 13:16:03.058000+00:00
['Payments', 'Startup', 'Technology', 'Full Stack Development']
414
The New Graphics Cards From Nvidia || RTX 3080Ti and RTX 3070Ti
The New Graphics Cards From Nvidia || RTX 3080Ti and RTX 3070Ti

At the Computex 2021 computer fair in Taipei (Taiwan), Nvidia presented the new flagship of its catalog of high-end gaming graphics cards, the GeForce RTX 3080 Ti, 50 percent faster than the previous generation, alongside the new RTX 3070 Ti model. Nvidia's new RTX 30 Ti graphics cards are aimed at current video games that have raised the realism demanded of graphics cards, such as Cyberpunk 2077 and Watch Dogs: Legion, as reported by Nvidia in a statement. The RTX 3080 Ti is powered by Nvidia's Ampere graphics architecture, which improves performance in features such as ray tracing, and also includes Nvidia DLSS, Nvidia's AI supersampling technology that improves gaming graphics and reduces latency. The company's new flagship improves rendering speed up to twofold on traditional tasks compared with the GTX 1080 Ti, a model two generations older, and the increase is even greater on modern workloads such as ray tracing. Nvidia has also introduced the new GeForce RTX 3070 Ti, a card that improves performance by up to 50 percent over the previous generation (RTX 2070 SUPER), while the number of frames per second is double that of cards from two generations ago (GTX 1070 Ti). Nvidia's new RTX 3080 Ti will be available globally from June 3 for a price of $1,199 (€980), while the RTX 3070 Ti will be sold from June 10 for $599 (€490).
https://medium.com/@tehnologijaviews/the-new-graphics-cards-from-nvidia-rtx-3080ti-and-rtx-3070ti-e8f29b9a8033
[]
2021-06-01 16:03:19.229000+00:00
['Technology', 'Graphics', 'Computers', 'Nvidia', 'Cards']
415
What I Learned From Completing the MIT xPro Data Science Course
What I Learned From Completing the MIT xPro Data Science Course

The personal, the professional, and the learning platforms

Screenshot by Author

I am so happy I did this: 99/100 in the MIT xPro Data Science and Big Data Analytics online course, after 121 hours of work in the last few weeks. Some say it's good to practice retrieval as part of the human learning process, so let me summarise what I have learned. Here is a three-part article series about it:

The personal insights, on taking up a challenge and a new learning project;

The professional insights, on data science and machine learning; and

The modern way of learning, on virtual learning platforms, comparing MIT xPro with popular alternatives like Udemy.

In short, my gut feeling says: in the new world of technologies and machines, it is just as important that I keep up the dancing, keep up the yoga, and take the time to walk away from the screen! I hope you find the lessons shared here useful for your own learning journey.
https://medium.com/the-innovation/what-i-learned-from-completing-the-mit-xpro-data-science-course-314314fe558c
['Nicole Liu']
2020-07-03 07:01:51.476000+00:00
['Technology', 'Lifelong Learning', 'Data Science']
416
Your Entire Life Is on Gmail. It’s Time to Clean That Up.
Your Entire Life Is on Gmail. It's Time to Clean That Up.

Don't suffer like I've suffered

Photo: Jay Wennington/Unsplash

Wednesday, 2 p.m. Eastern Time: I am on hour 9,000 of deleting emails from my overloaded Gmail account. My eyeballs are leaking out of my head and pooling in a sticky puddle on my laptop's mouse pad. Or it feels like it, anyway. In recent weeks, Google has sent me repeated warnings that I'm approaching my Google Drive storage limit. I had rarely considered that Google would limit my storage capacity; the company's potential for data collection seems infinite, incapable of being incapacitated or overburdened. And yet my email account, which I've had since 2014, is finally tired of holding my endless stream of newsletters and press releases; the 15 gigabytes that Google allots free users is nearly full.

I'd hoped to write this story with a title along the lines of "One Simple Trick to Clearing Out Your Google Drive Storage." Deleting every last email in your inbox and starting fresh would be ideal, but there's such a wild mix of correspondence stored there — chain emails from your second cousin, newsletters from companies you bought a sweater from once, love letters from the early days of a current relationship — that the nuke-and-run method isn't recommended or possible for anyone but those who lack even a drop of sentimentality. For everyone else, there's no one trick to clearing out your Google Suite. There are strategies, which I will get into below, but the best thing you can do, I have unfortunately discovered, is keep tidy as you go. And that means deleting all the emails you don't need as you receive them so you don't get caught up in the mess I spent the last several days cleaning up.

Over the course of the past few days, I've been experimenting with various methods of deleting my emails and documents. It was a painful experience: Reading old emails that detailed difficult situations and brought up bitter memories was not how I would prefer to spend my workday (or my time off), and it wasn't easy to figure out what I could justify deleting (emails with former bosses about banal topics) and what should stay (contracts, old emails from friends). My years of resistance toward deleting any of these emails — out of a sense of nostalgia or concerns that I might need them for whatever reason someday — likely fall into the bucket of what researchers consider "digital hoarding."

Digital hoarding, which I've written about before, might sound somewhat hyperbolic — after all, it's not like someone who compulsively splurges on Steam sales must then literally wade through piles of games on their way to the bathroom. Yet studies show digital hoarding can be stressful and even upsetting to those who experience it. In 2018, researchers interviewed 45 of these so-called digital hoarders and found that the impression that digital space is endless contributed significantly to people's tendency to hoard. They were surprised at the volume of digital stuff they'd accumulated, but still struggled to come to terms with deleting much of it. Many people simply didn't care about the pileup of documents, emails, photos, and music. "I can't be bothered going through it all, there are too many," said one 30-year-old participant. "I've left it so long now that going back and sorting through is not something I can be bothered with," said another, age 25. For many participants, the prospect of going through and deleting their digital crap was anxiety-inducing and stressful.
This certainly aligns with my own experience: When I first faced the mess that was my Google Drive on the day I took this assignment, I felt panicked and overwhelmed by the sheer magnitude of the task in front of me. Similar to the terror that might precede sitting down to scrub the baseboards with a stack of sponges, I was resistant and angry before I got started on my Google Suite cleaning. But once I got started, I kind of… couldn't stop. Seeing that little percentage tick down as I went through my emails was immensely satisfying, like creating vacuum lines on a carpet.

So, yes, there's no real trick to this, and my guess is that this is intentional: If it's a pain in the butt to organize and delete your emails and files, your only options are either to a) get a new email address (I legitimately considered this) or b) pay Google for more storage, which starts at $1.99 a month for 100 gigabytes and goes up to $10 a month for 2 terabytes. And the more emails you keep and accumulate, the more delicious, juicy data is available for Google to collect.

What you need to do first is figure out where you're storing most of the junk taking up the precious 15 gigabytes Google allots you. To do this, go to One.Google.com and, on the left sidebar, click "Storage." There, you'll find a breakdown of how much space Google Drive, Gmail, and Google Photos are taking up, respectively. Unless you're a photographer or a big fan of huge PDFs and weighty spreadsheets, most of the action is probably happening in your email.

The best way to delete mass emails without regret — or sadness, as I found as I thumbed through those depressing messages from five years ago — is to temporarily sign up for Mailstrom.co. Mailstrom analyzes your email and, theoretically, allows for quick and easy unsubscribe and delete options. (For those worried — with good reason — about their privacy, Mailstrom says it won't share or sell your data for advertising purposes, and it deletes your data three months after you've canceled your account.) But after you've deleted 2,500 emails through Mailstrom, your free trial ends, and you need to pay between $9 and $30 a month to continue using the service. If I were going to pay, I'd probably just buy the Google storage, which is cheaper than paying for a streamlined deletion method. Instead, I recommend using Mailstrom to show you which senders are pummeling you with the most garbage: I am my second-worst offender!

Once you've determined the worst sender offender, you can go into Gmail, search from:[email protected] (or whoever is sending you tons of email), and mass delete from there. Once you're looking at your search results, click the "Select All" box on the left side of your inbox, and then be sure to click "Select all conversations that match this search," like so:

That way, you'll delete all the messages sent by that particular sender, not just the most recent 50. That should get rid of a pretty decent chunk of your email clutter in a minimal amount of time, but if you want to keep going, I'd recommend a few more courses of action:

Delete everything in your Promotions and Social folders. Just go nuclear on those folders. Check the past few pages to ensure nothing important lands in there, and then select all and delete. That will likely be thousands, if not more, of the remaining marketing and newsletter stragglers you didn't catch when you used the Mailstrom method.
If you only want to delete Promotions or Social emails older than a specific date, use the following search term: category:promotions , older_than:2y. (That space between "promotions" and the comma is intentional — the search won't work without it.) I did this first and then decided, whatever, I don't need any of that junk. Bye! You can do the same with newsletters with category:updates , older_than:2y.

Search for all your emails larger than a specified size, then go through and manually delete (or be bold and "select all" — "delete"). Enter larger:10mb for this option. This takes more time, so I only recommend this if you've already nuked your marketing emails and still have a good chunk of space to free up. It makes more sense to sort out and delete emails larger than a specific size than attachments larger than a specific size, because plenty of emails are ginormous without the additional help of an attachment. But if you want to focus on attachments, type has:attachment larger:10MB (or however big) into the search bar and go wild. You can find more helpful Gmail search terms here, though the above are the terms I found most effective for deleting swathes of email at once.

Because Google considers your Trash to be part of your allotted 15 gigabytes, you need to empty it before you know how much you've truly managed to delete. Once you empty your trash, those emails are gone forever, but I don't really recommend clicking through and looking at what's in there, because you'll find yourself three hours later red-eyed and extremely bored. Just hit delete — and admire your clean baseboards/organized closet/sparkling tile or whatever household cleaning analogy would give you the most satisfaction. Now, hopefully, you are down to the point where you can begin to accumulate email again without worrying about running out of space for the next couple of years. Here's where I stand now:

It took me several days to get here, but only because I took the time to try every conceivable method in order to deliver to you, dear reader, those which I've found most effective. Still, you probably need to put at least an hour or two of work into this, I'm sorry to say. But you never have to do this again if you follow two very simple rules from here on out:

Mark any and all press releases, marketing emails, et cetera that you don't want to see as spam. If you want to be nice, you can email the marketing company or PR reps and respectfully ask to be removed from their mailing list, but if you're lazy or simply receive too many of them to do this, just file them to spam. Gmail automatically clears out your spam folder after 30 days, so they're as good as gone after a month. Do this consistently, as soon as you see them.

Delete any other emails you don't care about immediately. Don't leave them there because maybe you'll read them later. Don't let them collect dust. Delete. Be ruthless, seriously. No time like the present to shed your indecisive, sentimental nature when it comes to messages from your alumni association or the rescue where you got your dog. Bye!

With this, you should stay ahead of the game, with minimal effort, for the next several years at least. Once you've cleared out your Gmail, you don't need to get complicated with folders and labels and such, which get cumbersome if you have too many. (I only have two: one for random thoughts I send myself in the middle of the night about my novel, and one for press releases from book publicists, since I often respond to those en masse at a later date.)
By making the spam and delete buttons your best friends, you will avoid the disaster you have wrought upon yourself for a very, very long time.
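If you'd rather script the purge, the same search operators work through the official Gmail API. Below is a minimal, hedged sketch using google-api-python-client and google-auth-oauthlib; credentials.json is a placeholder for an OAuth client file from the Google Cloud console, and note that batchDelete skips the Trash and deletes permanently.

    from google_auth_oauthlib.flow import InstalledAppFlow
    from googleapiclient.discovery import build

    # The full mail scope is required for batchDelete.
    SCOPES = ["https://mail.google.com/"]
    flow = InstalledAppFlow.from_client_secrets_file("credentials.json", SCOPES)
    creds = flow.run_local_server(port=0)
    service = build("gmail", "v1", credentials=creds)

    # Same operator syntax as the Gmail search bar.
    query = "category:promotions older_than:2y"
    resp = service.users().messages().list(
        userId="me", q=query, maxResults=500).execute()
    ids = [m["id"] for m in resp.get("messages", [])]

    if ids:
        # Permanent deletion: these messages do NOT go to the Trash.
        service.users().messages().batchDelete(
            userId="me", body={"ids": ids}).execute()
        print(f"Deleted {len(ids)} messages matching: {query}")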
https://debugger.medium.com/your-entire-life-is-on-gmail-its-time-to-clean-that-up-c81106e099b4
['Angela Lashbrook']
2020-10-16 05:32:59.829000+00:00
['Technology', 'Gmail', 'Productivity', 'Digital Life', 'Email']
417
Demand for Data Skills has Skyrocketed
Demand for Data Skills has Skyrocketed

Top data skills you need in today's data-driven world and how to acquire them

Photo by NESA by Makers on Unsplash

I. Introduction

In the past decade, the demand for individuals with data skills has skyrocketed. A recent study using data collected from LinkedIn shows that most of the top tech jobs in the United States and worldwide are related to data, as shown in the figures below:

Image by Benjamin O. Tayo

Image by Benjamin O. Tayo

The figures above show that most of the top tech jobs in today's world are related to data. As more and more companies become data-driven, it is no surprise that there is a high demand for workers with data-related skills (data mining, data storage, data retrieval, data transformation and cleaning, and data analysis). We also notice that infrastructures such as Linux, Azure, and AWS, which are used for large-scale data science projects, feature among the top tech skills. This is because more and more companies are using cloud computing for data science and machine learning projects. The growth in the demand for skilled workers with advanced data skills can be attributed to 3 factors:

The world is producing data at an unprecedented rate; as a result, data has become a new commodity with extremely high value. There is, therefore, a need for highly skilled individuals to mine, transform, and analyze the data.

More and more companies are becoming data-driven. These companies are now creating teams of skilled workers that can work together to leverage the power of data for improving daily business operations or increasing sales and profits.

Global competition in leveraging tech skills to optimize business operations and decision making is increasing. As a result, many companies are putting more resources into recruiting, hiring, and nurturing the right talent to remain in the global competition.

If you are considering joining the data workforce, you may be wondering what data skills you need and how to acquire them.

II. Data Job Titles

Some of the top data job titles that are in high demand include the following: a) Data Scientist b) Data Analyst c) Business Intelligence Analyst d) Database Developer e) Database Administrator f) Data Engineer g) Data Analytics Manager h) Big Data Software Developer i) Cloud Developer j) Cloud Software Engineer

III. How to acquire data skills?

1. Massive Open Online Courses (MOOCs)

The rising demand for data science practitioners has given rise to a proliferation of massive open online courses (MOOCs). The most popular MOOC providers include the following: a) edX: https://www.edx.org/ b) Coursera: https://www.coursera.org/ c) DataCamp: https://www.datacamp.com/ d) Udemy: https://www.udemy.com/ e) Udacity: https://www.udacity.com/ f) Lynda: https://www.lynda.com/

If you are going to take one of these courses, keep in mind that some MOOCs are 100% free, while others require you to pay a subscription fee (anywhere from $50 to $200 per course or more, varying from platform to platform). Keep in mind that gaining expertise in any discipline requires an enormous amount of time and energy, so do not be in a rush. If you decide to enroll in a course, make sure you are ready to complete the entire course, including all assignments and homework. Some of the quizzes and homework assignments will be quite challenging. However, keep in mind that if you don't challenge yourself, you won't be able to grow in your knowledge and skills.
Having completed many data science MOOCs myself, below are 3 of my favorite data science specializations.

(i) Professional Certificate in Data Science (HarvardX, through edX)

Includes the following courses, all taught using R (you can audit courses for free or purchase a verified certificate): Data Science: R Basics; Data Science: Visualization; Data Science: Probability; Data Science: Inference and Modeling; Data Science: Productivity Tools; Data Science: Wrangling; Data Science: Linear Regression; Data Science: Machine Learning; Data Science: Capstone

(ii) Analytics: Essential Tools and Methods (Georgia TechX, through edX)

Includes the following courses, all taught using R, Python, and SQL (you can audit for free or purchase a verified certificate): Introduction to Analytics Modeling; Introduction to Computing for Data Analysis; Data Analytics for Business.

(iii) Applied Data Science with Python Specialization (the University of Michigan, through Coursera)

Includes the following courses, all taught using Python (you can audit most courses for free; some require the purchase of a verified certificate): Introduction to Data Science in Python; Applied Plotting, Charting & Data Representation in Python; Applied Machine Learning in Python; Applied Text Mining in Python; Applied Social Network Analysis in Python.

2. Learning from a Textbook

Learning from a textbook provides more refined and in-depth knowledge beyond what you get from online courses. This book provides a great introduction to data science and machine learning, with code included: "Python Machine Learning", by Sebastian Raschka. https://github.com/rasbt/python-machine-learning-book-3rd-edition The author explains fundamental concepts in machine learning in a way that is very easy to follow. Also, the code is included, so you can actually use the code provided to practice and build your own models. I have personally found this book to be very useful in my journey as a data scientist, and I would recommend it to any data science aspirant. All you need is basic linear algebra and programming skills to be able to understand the book. There are lots of other excellent data science textbooks out there, such as "Python for Data Analysis" by Wes McKinney, "Applied Predictive Modeling" by Kuhn & Johnson, "Data Mining: Practical Machine Learning Tools and Techniques" by Ian H. Witten, Eibe Frank & Mark A. Hall, and so on.

3. Medium

Medium is now considered one of the fastest-growing platforms for learning about data science. If you are interested in using this platform for data science self-study, the first step is to create a Medium account. You can create a free account or a member account. With a free account, there are limitations on the number of member articles you can access per month. A member account requires a subscription fee of $5/month or $50/year. Find out more about becoming a Medium member here: https://medium.com/membership. With a member account, you will have unlimited access to Medium articles and publications. The two top data science publications on Medium are Towards Data Science and Towards AI. Every day, new articles are published on Medium covering topics such as data science, machine learning, data visualization, programming, artificial intelligence, etc. Using the search tool on the Medium website, you can access many articles and tutorials covering a wide variety of topics in data science, from basic to advanced concepts.
4. KDnuggets Website

KDnuggets is a leading site on AI, analytics, big data, data mining, data science, and machine learning. On the website, you can find important educational tools and resources in data science, as well as tools for professional development.

5. GitHub

GitHub contains several tutorials and projects on data science and machine learning. Besides being an excellent resource for data science education, GitHub is also an excellent platform for portfolio building. For more information on creating a data science portfolio on GitHub, please see the following article: A Data Science Portfolio is More Valuable than a Resume.

6. LinkedIn

As data science is a field that is ever-evolving due to technological innovation and the development of new algorithms, one way to stay current is to join a network of data science professionals. LinkedIn is an excellent platform for networking. There are several data science groups and organizations on LinkedIn that one can join, such as Towards AI, DataScienceHub, Towards Data Science, KDnuggets, etc. You can also follow top leaders in the field on this platform.

7. YouTube

YouTube contains several educational videos and tutorials that can teach you the essential math and programming skills required in data science, as well as several data science tutorials for beginners. A simple search will generate several video tutorials and lectures.

8. Khan Academy

Khan Academy is also a great website for learning the basic math, statistics, calculus, and linear algebra skills required in data science.

IV. Summary

In summary, we've discussed some of the top data skills that are currently in high demand. As more and more companies become data-driven, the demand for workers with advanced data-related skills will continue to increase. The skills to focus on will depend on which tech sector one is interested in. For data analyst/data scientist job roles, it is essential to master skills such as SQL, Python, machine learning, and AWS.
https://medium.com/towards-artificial-intelligence/demand-for-data-skills-has-skyrocketed-27dbff5058f2
['Benjamin Obi Tayo Ph.D.']
2020-11-23 17:06:58.745000+00:00
['Programming', 'Data Driven', 'Data', 'Machine Learning', 'Technology Skills']
418
Stateful VS Stateless architecture
Stateful VS Stateless architecture

Alright, so you type in the URL of a website, authenticate yourself to the server (prove that you are who you say you are, usually with a username and a password), and then you view your profile. Right? Simple, isn't it? That's the picture in its simplest form. That's what a user sees, but underneath there is a lot more going on, which brings us to today's discussion of stateful and stateless applications. Let's begin with an example.

Stateful applications

Consider an application which has two pages or services, "Login" and "ViewProfile". The user Bob sends a request to server1 for the login page, which looks like this: http://server1/login?username=bob&password=bob123 Now the server queries the database to validate these credentials, and if they are valid, the server responds with a Status: 200 OK response and sets a session variable or a global variable on the server. Now, when the user Bob requests the ViewProfile page, the server remembers the session variable and responds with a 200 OK response. Usually there is an array or some sort of session variable that marks this user as logged in.

Advantages of stateful applications

The next time Bob makes a call to the ViewProfile page, the server won't have to query the database. Storing the state saves the trouble of querying the database twice, and the backend may be faster for it.

Disadvantages of stateful applications

So where does it go wrong? Remember that the session variable is stored in memory, the RAM. So if we have multiple servers and the user Bob tries to request the ViewProfile page from server2, it will fail. Why? Because server2 does not have the session variable stored; Bob logged into server1. This means we cannot scale our network horizontally, and Bob has to authenticate again every time he connects to a different server. Now you might say, "I don't go to server1 or server2." What really happens is that there is another machine called a Load Balancer, which simply balances the load/traffic. Bob does not know about server1 or server2; he knows about the LB. So he makes a login call to the LB, and the LB randomly assigns a server. Let's say it picks server1: we do the login call, the server queries the database, and we get a 200 OK response. Now Bob calls for the ViewProfile page. If he gets lucky he might get server1, but what if the LB picks server2? The call will fail, because he is not logged into server2. Imagine logging into facebook.com, going to your profile, and having to log in again.

Stateless applications

Now consider the same application with our two services, "Login" and "ViewProfile", with multiple servers, a Load Balancer, and a database. Alice makes a call to the login page and authenticates herself using her credentials. Everything is the same as before, but now, when the database authenticates those credentials, it also sends a token (usually a pseudo-random token) to the user (in our case Alice), and with each request Alice makes, the token is sent along with the request. Now Alice requests the ViewProfile page. If the LB decides to take her to server1, server1 queries the database and validates that token, and if the token that Alice has is the same as the one sent by the database, we get a 200 OK response. On the other hand, if the LB decides to take her to server2, that's fine too: server2 will again query the database and validate the token. It does not have to be the same database; it could be another intermediate database.
Advantages of stateless applications

Unlike stateful applications, we can scale our network horizontally; this is the approach usually used at larger companies.

Disadvantages of stateless applications

As stated before, with each request the user makes, the token is attached to it, so there is the extra cost of a database query for each request. But this type of authentication is far more flexible.
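To make the contrast concrete, here is a minimal sketch of both approaches in Python. It is an illustration under simplifying assumptions: an in-memory dict stands in for the server's RAM, and an HMAC-signed token stands in for the database-validated token described above. It is not production authentication code.

    import hashlib
    import hmac
    import secrets

    SECRET = b"key-shared-by-all-servers"

    # --- Stateful: the session lives in ONE server's RAM ---
    sessions = {}  # lost if the load balancer routes you to another server

    def stateful_login(user: str) -> str:
        session_id = secrets.token_hex(16)
        sessions[session_id] = user      # only THIS server remembers Bob
        return session_id

    def stateful_view_profile(session_id: str):
        return sessions.get(session_id)  # fails on server2: not in its RAM

    # --- Stateless: ANY server can verify the token ---
    def stateless_login(user: str) -> str:
        sig = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
        return f"{user}:{sig}"           # token travels with every request

    def stateless_view_profile(token: str):
        user, sig = token.split(":")
        expected = hmac.new(SECRET, user.encode(), hashlib.sha256).hexdigest()
        return user if hmac.compare_digest(sig, expected) else None

    token = stateless_login("alice")
    print(stateless_view_profile(token))  # "alice", on any server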
https://medium.com/@rudrasonkusare0222/stateful-vs-stateless-architecture-b2d3273ce85c
['Rudra Sonkusare']
2021-08-17 12:49:03.504000+00:00
['Web Development', 'Information Technology', 'Information Security', 'Infosec', 'Cybersecurity']
419
Are we testing viruses in the wrong way?
1. Polymerase Chain Reaction (PCR)

PCR is probably the most commonly used method in genetic diagnostics. It rapidly makes many copies of a specific DNA sample, which can then be studied. In practice this generally means "seeing a band on a gel", from which the presence of a specific DNA sequence is concluded. Briefly, if a DNA sequence of interest is present in a sample (e.g. one related to a virus or a known gene), it is amplified and then seen on a gel. If it is not present, the test is negative (no amplification and nothing significant on the gel). Diagnostics is not the only purpose, as the technique is also used for DNA and gene cloning, gene mutagenesis, construction of DNA-based phylogenies, etc. If you are a biologist/pathologist, you can probably skip this section. If you are new to this, prepare yourself for a little headache, and this is why I believe that in the future we will NOT use this technique. For a general reader the required steps might appear cumbersome, and indeed they are. The third section describes a simpler protocol.

PCR was invented in 1984 by the American biochemist Kary Mullis, a peculiar character — surfer and fan of LSD — who won the Nobel Prize in 1993 for this technique [1]. Since then, it has been useful in countless contexts. Several improvements and evolutions of this technique exist, such as digital PCR (dPCR), quantitative PCR (qPCR), and asymmetric PCR.

The basic idea of PCR is to perform 3 steps cyclically [2]. These are changes of temperature, hence the method is also called thermal cycling: (1) denaturation of the DNA template into single strands; (2) annealing of primers to each original strand for new strand synthesis; and (3) extension of the new DNA strands from the primers. In the denaturation step, the two strands of the DNA double helix are physically separated (generally at 94–98 °C); by breaking the hydrogen bonds between complementary bases, two single-strand DNA templates are obtained. The annealing step lowers the temperature to 50–65 °C and allows primers (short single-strand fragments that serve as starting points) to bind to the complementary sequences of DNA. This opens the way to the replication of DNA strands (elongation) by adding free dNTPs (deoxynucleoside triphosphates) and creating new templates for the polymerase at 72–80 °C (the polymerase is the enzyme driving the process). Therefore, over several cycles (e.g. 30), the original DNA template is exponentially amplified.

In this process four main reagents are necessary: the DNA sample, the primers (two short single-strand DNA fragments known as oligonucleotides, with sequences complementary to the target DNA region), dNTPs, and a DNA polymerase (plus a machine to change the temperature quickly and stably). Luckily, nowadays you can use master mixes which already include most elements. The most used DNA polymerase enzyme is Taq polymerase, which is heat-stable and was originally isolated from the thermophilic bacterium Thermus aquaticus. If the enzyme were not heat-stable, it would have to be added at every cycle. Two primers are needed, one per side (as shown below), and they are specific to what we are testing (there are primers for malaria, primers for COVID, primers for HIV, etc.). Once we know the specific primer for what we want to test, we can order it via specific online tools such as IDT PrimerQuest. The setup can be summarized in the following picture, representing on the left the ingredients and on the right the aforementioned cycles.
Credits: Aga Khan Academy

However, after the reaction, the "cocktail" still needs to be loaded into an agarose gel with TAE buffer inside an electrophoresis chamber to allow visualization.

This has also been the most used way to detect COVID-19. A sample from a patient is collected (e.g. from a swab inserted in the nose or throat) and then analyzed by doing a PCR. To further complicate the situation, we have to bear in mind that this kind of virus carries only RNA, so first we have to create a complementary DNA from the viral RNA. Moreover, with a variation of PCR called RT-PCR, it is possible to quantify the viral load. All those steps are also summarized in the following video:

As you can imagine, given so many steps and conditions (primer sequences, temperatures, etc.), many things can go wrong, and this is why in the last 10 months many people reported negative results to this test even though they probably had COVID-19. Also, some people were told at some point that they were cured when they were not, and therefore, when they later fell sick again, they believed they had contracted the virus a second time. The estimated success rate was about 60% … yes, roughly 4 times out of 10 it might not work. The main reason for this low sensitivity is believed to be the viral load collected by the swabs: namely, it is difficult to collect viruses from the throat or nasal cavity with a swab at the right moment [3]. Also, the required time is about 2 hours, though in clinical settings, results were generally given to patients within 24–48 hours. 2 hours might seem little, but in large-scale situations like those we faced, it is a huge amount of time.
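As a back-of-the-envelope illustration of the exponential amplification described above, here is a tiny sketch of the idealized doubling per cycle. The starting copy number is an assumption, and real reactions amplify somewhat less than 2x per cycle.

    # Idealized PCR: every thermal cycle doubles each DNA template present.
    initial_copies = 10   # assumed number of target templates in the sample
    cycles = 30           # a typical run, as mentioned in the text

    copies = initial_copies * 2 ** cycles
    print(f"{copies:,} copies after {cycles} cycles")
    # -> 10,737,418,240 copies: enough to show up as a band on the gel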
https://medium.com/predict/are-we-testing-viruses-in-the-wrong-way-a91279ba5e85
['Dr. Alessandro Crimi']
2020-08-21 08:53:48.929000+00:00
['Future Technology', 'Biology', 'Covid 19', 'Medicine', 'Health']
420
Camp IHC — Summer Camp Videos
A sleep-away summer camp where the stories told are more magical than you could ever imagine.
https://medium.com/campihc/camp-ihc-video-of-the-week-a4e715db85cb
['Camp Ihc']
2020-12-02 15:40:05.883000+00:00
['Children', 'Summer Camp', 'Technology', 'YouTube', 'Videos']
421
[GEEKY] Mac Sound Converter Utility
Apparently (I'm not sure why yet), on Android, sound-effect files in the OGG format work better than those in WAV or MP3. It might be an AndEngine thing; I am not sure. Anyway, here's the result of half an hour of googling and experimenting to find a reliable way to convert the files. I tried a bunch of things, and I finally settled on MediaHuman Audio Converter. Use it. It really works and is nice and simple.
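For what it's worth, if you'd rather script the conversion than use a GUI, ffmpeg can do the same job. A minimal sketch, assuming ffmpeg is installed and on your PATH; the filenames are placeholders.

    import subprocess

    # Convert a WAV sound effect to OGG; ffmpeg infers the codec
    # from the .ogg extension.
    subprocess.run(["ffmpeg", "-i", "explosion.wav", "explosion.ogg"],
                   check=True)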
https://medium.com/pito-s-blog/geeky-mac-sound-converter-utility-e91cd2089105
['Pito Salas']
2017-06-08 19:19:58.601000+00:00
['Technology', 'MP3', 'Programming', 'Wav', 'Ogg']
422
The Basics of Data Privacy
What is Data Privacy?

A lot of talk in the news in recent times has been on the subject of data privacy. Most people have heard of the Facebook–Cambridge Analytica scandal and the Google+ bug, but the actual definition of data privacy is something that escapes a lot of us. In today's tech-savvy society, where personal information is increasingly being stored as digital data, it is important to understand how one's personal data can be used by others, and also how to safeguard against the misuse of personal information. In the article Data Privacy Concerns: An Overview for 2019, Rhonda Bradley, writing for The Manifest, lays out what constitutes personal data and how it can be misused by both big companies and cyber-criminals.

In short, personal data is personal information that can be used to identify a specific individual. This includes, but is not limited to, one's email address, phone number, date of birth, credit card information, and social security number. More recently, this has also come to include one's facial and voice recognition data. The focus of data privacy is often on how to safeguard personal data against the malicious activity of criminals. Once scammers have a hold of someone's personal data, they could use that information to send them robocalls and emails, hack into their social media accounts, gain information about their family members, and even access their banking and credit card information.

Hackers can potentially access banking and credit card information using the personal data of their victims.

But being the target of malicious activity by criminals is not the only data privacy concern people need to be aware of. Many reputable companies that play a large role in our daily lives, such as Facebook, Google, and Uber, have been found to be negligent in their handling of the personal data of their users. All three of those corporations have experienced massive privacy breaches. Upon discovering their respective breaches, instead of alerting users that the security of their personal data had been compromised, they decided to keep the news to themselves. It was not until much later that the companies finally admitted to their mistakes, either by their own admission or through the emergence of a whistleblower. This has brought to light the topic of corporate responsibility in protecting users' privacy, and there is now a lot of discussion regarding how much responsibility corporations must bear and how much accountability users themselves must carry.

Big companies collect huge amounts of data on their users, but are not always responsible in their handling of users' personal information.

Case Study: FaceApp

Developed by the Russian company Wireless Lab, FaceApp is a mobile app that uses a picture of the user's face to create realistic transformations of that face using artificial intelligence. The company's website brands the product with the slogan: "Transform your face using Artificial Intelligence with just one tap". Using FaceApp, users can upload a picture of their face and instantly receive a realistic image of how they would look either older or younger. While the app's capabilities are innovative and entertaining, the company's use of users' data has been more than questionable.

Using FaceApp, users can transform their face to see how they would look with different ages and genders. Image from: FaceApp

The controversy surrounding FaceApp centers on the fact that the app's creators are harvesting the metadata from users' pictures.
The terms and conditions (as of November 2019) on FaceApp's website state: "FaceApp, its Affiliates, or Service Providers may transfer information that we collect about you, including personal information across borders and from your country or jurisdiction to other countries or jurisdictions around the world. If you are located in the European Union or other regions with laws governing data collection and use that may differ from U.S. law, please note that we may transfer information, including personal information, to a country and jurisdiction that does not have the same data protection laws as your jurisdiction. By registering for and using the Service you consent to the transfer of information to the U.S. or to any other country in which FaceApp, its Affiliates or Service Providers maintain facilities and the use and disclosure of information about you as described in this Privacy Policy."

Users' photos may be transferred to foreign countries and used for unknown purposes there, all without the knowledge of the users themselves. The more alarming news is that FaceApp is not the only app, or even the only photo-editing app, with weak privacy protections. Granted, a lot of the concern over FaceApp being able to harvest such data from its users stems from the fact that the company behind the app is Russian. Due to the close relationship between Russian companies and the Russian government, the fear is that the Russian government will collect this user data and use it for its own purposes. However, despite the fact that tech companies and users in the US enjoy a higher level of legal protection from the government in general, users of American-made apps should still be cautious about how their data could be exploited. While American companies such as Snapchat and Instagram try to comply with privacy laws in general, incidents such as the Facebook–Cambridge Analytica scandal show that large tech companies can still fail at protecting the privacy of their users' data. Regardless of the origin of the company, app users should be cautious about how their information could be used, even by the largest and most reputable of companies.

How to Protect Your Data

In this day and age, it is important to know how to safeguard one's personal information. While there are data privacy regulations in place, such as the General Data Protection Regulation (GDPR) that protects netizens in the European Union, companies cannot be trusted to always follow the rules. Also, cyber-criminals will simply ignore data protection laws, or they will operate from countries where these protections do not exist. So how should users go about protecting their personal data?

Use passwords for all devices

Ensure passwords are both strong and changed regularly (a quick way to generate strong passwords is sketched below)

The same password should not be used more than once

Free WiFi services should be avoided

Privacy and sharing settings on social media accounts should be checked regularly, and inactive online accounts should be closed

Ultimately, users may never be 100% safe from cyber-criminals, but staying aware of the latest scam tactics and knowing how to protect personal data against these scams will reduce the risk of becoming a victim of cyber-crime. While it is almost impossible to be integrated into today's digital society without giving out personal data to big tech companies, being aware of these companies' policies, and being informed of the latest tactics used by cyber-criminals, will help users understand how best to protect their personal information.
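On the password advice above, here is a minimal sketch of generating a strong, unique password per account using Python's standard library. The length and character set are assumptions for illustration; in practice, a password manager does this for you.

    import secrets
    import string

    # Letters, digits, and punctuation; some sites reject certain
    # symbols, so trim the alphabet as needed.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def strong_password(length: int = 16) -> str:
        """Generate a random password; use a different one per account."""
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(strong_password())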
https://medium.com/ds3ucsd/the-basics-of-data-privacy-fd884e92ad98
['Seth Lee']
2019-12-04 20:42:48.252000+00:00
['Technology', 'Personal Data', 'Data Science', 'Privacy', 'Data Privacy']
423
Dead Rising 4 is One of the Best Games That Killed a Franchise
I love Dead Rising. It's a phenomenal zombie-action open-world franchise, with a design that was a bit ahead of market norms. Dead Rising is a large-scale, single-player-centric open-world game. It casts you as a man of action surrounded by the undead. You have just your wits, a pile of weapons made from everyday objects, and your ever-growing skill tree at your disposal. Most of the games also make wonderful use of time, unlike nearly every other game set in a large world. Time is always ticking along, and the story advances whether you make it to where you're supposed to be or you don't. Challenging bosses with weird backstories, a quirky sense of humor, and the thrill of trying new weapons propel you through the multiple playthroughs required to see everything.

Dead Rising 4 mixed things up a little bit. It still features stalwart on-again, off-again protagonist Frank West, a cynical journalist who is really good at killing zombies. He's older in Dead Rising 4 and snarkier, and he's now voiced by film and TV actor Ty Olsson. He brings good energy to the role, even though the casting change was considered a mistake by some.

The biggest design change in Dead Rising 4 was the heavy reduction of the time mechanic. Again, I love the time limit in Dead Rising; I linked an entire article I wrote about it earlier in this piece. It gives the games a desperate, panic-inducing feel that helps immerse you in the experience of surviving a zombie outbreak. It also reminds me of old FMV games that relied on a clock, like Night Trap.
https://xander51.medium.com/dead-rising-4-is-one-of-the-best-games-that-killed-a-franchise-cc22769839a8
['Alex Rowe']
2019-12-24 23:52:37.504000+00:00
['Marketing', 'Gaming', 'Videogames', 'Technology', 'Zombies']
424
A Product Manager’s Guide to APIs
A Product Manager's Guide to APIs

Image by Fiverr

We live in a world where technology reigns and data presides at every corner. As users of many different products, we're no longer looking for the best product to get the job done; we're looking for the product that gets the job done AND works seamlessly with all the other products we use. It has therefore become increasingly important to understand tools that simplify workflows and integrate components to provide a seamless user experience for our customers. Since data has become abundant, innovative teams have grown exponentially better at forming these links and connections to simplify workflows via APIs.

What is an API?

An Application Programming Interface, or API, is in its simplest sense a technology that connects two systems. Here's an analogy explaining it: You go to a large library looking for 'The Da Vinci Code' by Dan Brown. But the library is huge and you have no clue where to find this book. Luckily for you, you see the librarian at their desk, along with a catalog of the types of books you can borrow. You request 'The Da Vinci Code' from their fiction list. The librarian walks through the labyrinth of shelves to find the book and brings it to you.

Photo by j zamora on Unsplash

In this analogy:

the library is the database — one of the systems involved

the books are the data

you are the requestor — the application/system looking for information

the librarian is the API — they take the request back to the database and return information to the requesting application

the request for the book is the call made to the API

the catalog represents the specific format the request has to follow so the API understands it

the book you received from the librarian is the response

By Sachin Jain for ByteNBit

In the simplest of terms, that is what an API does: it acts as the interface between two applications and facilitates information transfer while ensuring speed and security. It is a developer-centric tool — APIs are built by and for developers as part of the application code; however, this doesn't mean they can't drive value for the end user.

How do they work?

Let's take a real-life example. You are on LinkedIn trying to find companies in the fintech space. You type out your keywords (industry, location, etc.) and LinkedIn executes the search to display relevant results from its database of thousands of companies in less than a second. This request to look up all the relevant companies almost instantaneously is likely facilitated by LinkedIn's Company Search API (which is also available for use by external applications). This type of request-response interaction can occur within a product or with external products, and is used to facilitate information transfer ranging from financial payments data to location data, in order to provide a seamless experience for the end user.

Why are they important to product managers?

APIs open up a world of opportunities to build a more integrated product that provides more value for your users. As a product manager, it helps to understand the benefits and the constraints that come with the technical solutions built/leveraged by the product team, so you can make and communicate product decisions and strategy effectively.
I'm not saying you have to understand APIs to the point where you can dig right into the code. Instead, you should be able to understand the value they provide your user and business, so that you can identify and test whether there is a need for them and communicate WHY they are necessary to all stakeholders/partners of your product.

Using APIs built by others

I've created a really simple, short list of the 'Good' and the 'Bad' items to consider when thinking about using APIs built by others.

The Good:

It can provide a well-integrated and simple experience for the user

If you leverage other APIs, you can focus on building features that address the core need for your product, while maintaining a simple and complete flow for the user

It can reduce the effort to implement specific features

The Bad:

Some cost money (based on the volume of requests you're making)

You create a dependency on another system — if the other system changes their API call, you have to make changes on your end to ensure your user experience isn't interrupted

Building an API for others to use

There are a few key items to consider before undertaking this:

Is this something users want or need? If so, are they likely to use it? This can be validated quickly by talking to customers or even with landing pages to evaluate the demand

What value is this driving for our business? Are we expecting an increase in revenue? Or a higher conversion rate? It's also important to ensure that we are targeting metrics in line with our product strategy and our organizational goals

Can we even do this? Is our business ready to support the cost of building it out and also the cost of maintaining the infrastructure to provide this service for our users?

Implementation

Once you've identified the need for leveraging or building an API, it's also important to understand how to break this down into user stories for implementation while maintaining focus on your user persona's goal. It may help to split the user story into two — one for the API and one for the UI functionality/piece. However, you could include it all in a single user story and call out the API criteria specifically in the Acceptance Criteria. The approach should be discussed with the product team to see what works best for them.

Who is the user? This could be developers, testers of an application, or technical folks on specific internal teams. It is important to really understand and isolate your user and have a clearly defined persona for each type of user, to provide context on their specific needs and goals. For example, if your user is a developer, make sure you either define the specific type of developer (front end/back end/developer looking to incorporate payment functionality) or have a clearly defined user persona the team can refer to.

What is their need and their goal? For this portion, you need to understand the functionality that is expected (create, read, update, delete) by the user and tie that back to what the user is trying to achieve. Your final user story could go something like this: As a developer working on payment systems, I want to fetch the customer's relevant financial data when I send a customer identifier, so that I have the information from the customer to process the financial transaction.

Acceptance Criteria

I'd include any mandatory fields that need to be a part of the API requests and the key fields that you need from the API response.
If this connects into the front end, it's also important to call out those elements here to ensure they are tackled by the team. If you're integrating with an existing API, then it's best to read all the API documentation upfront and ensure it's attached to the stories for everyone's reference. Another section to include would be API criteria that are not directly associated with the actual function the API is trying to achieve, but that could impact the end customer. This could include the number of requests to be handled per unit of time, the time it takes to send out/receive a response, any authentication/authorization information expected by the API, and limits on the volume of data sent in a single API call. It's best to call this out explicitly and tackle it with the technical team, especially if these items can impact the end-customer experience.

A few more things to learn about APIs…

Now, this section includes more details that you could say are teetering on the edge of the more technical side of things. I've used this section to detail areas I'm familiar with through work and also to explore areas new to me.

Public vs Private

APIs largely fall into 2 buckets: public and private.

Public — these are APIs open for use external to a company. Key examples are the Google Maps API to leverage Maps functionality in your product, the Braintree API to leverage payments functionality in your product, or the LinkedIn Company Search API to add company search functionality to your product. The goal is to be able to share information for use cases being tackled by other companies, which can end up benefitting your end user (e.g. Open Banking in the world of financial data)

Private — these are APIs developed for internal use by developers/applications with specific access. Key examples are customer information APIs within a company that can be used by different business functions within that company for obtaining customer information pertaining to the problem they are tackling.

Types of APIs

I've only listed the two key types (there are other, older formats that are not used as commonly today)

REST (Representational State Transfer) — this is currently the most common type of API. REST represents a specific API architecture, and these APIs generally use HTTP functions to make/receive requests (with information typically being sent/received as JSON). A key thing to note about REST APIs is that they are flexible in terms of the types of data they return/take and are light on bandwidth. There are 4 key HTTP functions these APIs are based on: POST, GET, PUT, DELETE (performing create, read, update and delete functions respectively)

SOAP (Simple Object Access Protocol) — this is an actual protocol. SOAP APIs generally use HTTP and XML. These APIs are often heavier in terms of bandwidth and payload.

Webhooks

Webhooks get a special mention because they can be considered a special type of API — the simplest way to describe them is as a "reverse" API. With APIs, the data transfer will not happen unless a request is made explicitly; webhooks, on the other hand, trigger the data transfer based on an event (such as a payment being received, an update to a user's feed, etc.), which can be really beneficial for automatically triggering a downstream set of events.

Where do we go from here?

This was just a short intro into the world of APIs and how they could play into conversations with your stakeholders as a Product Manager.
Hopefully, it’s equipped you to identify opportunities to optimize your product and also understand the numerous items the product team has to consider when dealing with APIs. There’s still a lot more that comes into play when building or using APIs, ranging from understanding API documentation to figuring out how you authenticate users of your API, and this means there’s a lot to be learnt if you’re interested. Time to dive right into it!
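If you want to dive in with something concrete, here is a minimal sketch of the REST request/response pattern discussed above, using Python's requests library. The endpoint, fields, and token are hypothetical placeholders, not a real API.

    import requests

    BASE = "https://api.example.com/v1"   # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <your-token>"}  # placeholder auth

    # GET: read a resource (the "R" in CRUD).
    resp = requests.get(f"{BASE}/companies",
                        params={"industry": "fintech"}, headers=HEADERS)
    resp.raise_for_status()
    companies = resp.json()   # REST APIs typically respond with JSON

    # POST: create a resource (the "C" in CRUD).
    resp = requests.post(f"{BASE}/payments", headers=HEADERS,
                         json={"customer_id": "c_123", "amount": 1000})
    print(resp.status_code)   # e.g. 201 Created

A webhook inverts this flow: instead of your code calling requests.get, the provider POSTs to a URL you expose whenever the event (say, a payment received) occurs.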
https://medium.com/swlh/a-product-managers-guide-to-apis-c5fffff0e5e0
['Akshayaa Govindan']
2021-04-27 02:35:16.006000+00:00
['API', 'Optimization', 'Product Manager', 'Technology']
425
How TECHNOLOGY took out the headache of Ghana’s 2020 election
image by Francis Kokoroko

Digital transformation, and its societal impact, is an increasingly trendy topic among policy-makers, economists, and industry leaders. We will all agree that for some time now we have been in a new stage of transformation, where corporations and countries are focused on equipping themselves with advanced technologies and new business models to catch up with the fast-changing world. The question is: is this digital transformation making a positive contribution to our society? In my view, it is. Information technology is playing an incredible role in making human lives easier. It has added value to society and has also been of great help in mitigating important issues, including public health, our environment and biodiversity, and democratic processes. A case in point is the crucial role it played in the just-ended election in Ghana.

As evident in the recent elections held on December 7, 2020, information technology was used as a solution for many electoral hurdles, such as the creation of an accurate voter register, simplified voting and result tallying, faster transmission of election results, etc.

Another example was the adoption of social media as a campaigning medium by most political parties. The usually crowded rallies couldn't be organized because of the COVID-19 pandemic, which made such gatherings impossible. One could see adverts across social media channels from political parties trying to get their campaign messages across. Social media was also used as a tool to promote peace: hashtags such as #Ghanaforpeace and #peacefullelection2020, amongst others, were created to preach peace.

Again, the Electoral Commission (EC) of Ghana's use of another biometric verification method (facial recognition) made the whole process even easier. It solved the thorny issue of manual verification when the thumbprint fails, which had created confusion amongst party observers. This addressed the hassle people go through to vote, joining long queues, etc., a challenge which was prevalent in the previous election.

We can confidently say that Ghana passed yet another democratic test when it conducted what has been described as the best organized general election in the country's history, with the help of digitalization and the use of modern technology. With the way digitalization is helping solve major problems around the world, I see the future as promising. There is a need for everyone to stay abreast of modern technology to fit into the digital future. With the introduction of technologies like robotics and IoT, data analytics, data science, digital marketing, etc., we can all find a space within our interests and equip ourselves with the skills needed to fit in.

Written by Imelda Badoe
https://medium.com/@openlabs/how-technology-took-out-the-headache-of-ghanas-2020-election-8c08f2f96530
['Niit Openlabs Ghana']
2020-12-17 16:27:10.334000+00:00
['Data Science', 'Election', 'Information Technology', 'Digitalization', 'Digital Transformation']
426
How Sensors Improve Transmission and Distribution Network Management
While distribution and transmission network inspection has largely been a manual process, electric utilities are using sensor technology to achieve improved levels of speed and accuracy. FREMONT, CA: Power delivery systems are among the most important, most diverse, and most remotely located investments. There are several challenges that electric utilities face with their transmission and distribution assets. The industry is working to deploy sensor technologies and the associated innovations needed to help electric utilities address challenges associated with transmission and distribution. The implementation of sensors within transmission and distribution networks will allow equipment conditions to be monitored and communicated continuously. Knowing that a transmission or distribution asset is at risk of failure will allow actions to be taken to protect the safety of both utility personnel and staff. For example, with sensor-based monitoring, the internal discharge activity of transformers could be detected and communicated, allowing barriers to be put in place to reduce the risk to personnel until maintenance action is taken. Improved knowledge of transmission and distribution equipment conditions, and the stresses they are subjected to, will allow asset managers to manage the network better. Sensor data can be used with performance information, failure databases, and operational data to allocate resources better. The rating of transmission components is influenced by several factors, including weather, loading history, and component configuration. To deal with this complexity, sensors can be used so that utilities gain real-time knowledge of component condition. After an event takes place, investigating teams often have too little information to determine the root cause. This limits their ability to deal with similar situations or to improve strategies. But sensor deployment in transmission and distribution networks could provide the knowledge needed to identify the root cause, and even help reveal patterns so that strategies can be modified to prevent future occurrences. By quickly identifying potential problems, electric utilities can address the areas of transmission and distribution where safety concerns are most imperative and where the infrastructure would otherwise be at risk. These proactive initiatives will allow utilities to successfully kick-start numerous efforts to enhance the resiliency and reliability of their distribution and transmission networks.
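For a sense of what "knowing an asset is at risk" can look like in software, here is a deliberately tiny TypeScript sketch; the reading shape, field names, and threshold are invented for illustration, and real condition-monitoring platforms are far more sophisticated:

// Toy condition-monitoring check over sensor readings (all names/limits invented).
interface Reading {
  assetId: string;
  timestamp: number;          // Unix epoch millis
  partialDischargePC: number; // partial-discharge magnitude, picocoulombs
}

// Return the assets whose most recent reading exceeds the alarm limit.
function flagAtRiskAssets(readings: Reading[], limitPC = 500): string[] {
  const latest = new Map<string, Reading>();
  for (const r of readings) {
    const prev = latest.get(r.assetId);
    if (!prev || r.timestamp > prev.timestamp) latest.set(r.assetId, r);
  }
  return Array.from(latest.values())
    .filter((r) => r.partialDischargePC > limitPC)
    .map((r) => r.assetId);
}

console.log(flagAtRiskAssets([
  { assetId: "transformer-7", timestamp: 1700000000000, partialDischargePC: 620 },
  { assetId: "feeder-12", timestamp: 1700000000000, partialDischargePC: 80 },
])); // ["transformer-7"]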
https://medium.com/@appliedtechnologyreviewew/how-sensors-improve-transmission-and-distribution-network-management-9a071217d033
['Applied Technology Review']
2020-11-16 09:55:28.356000+00:00
['Technews', 'Networking', 'Network', 'Technology', 'Sensors']
427
Why End-to-End Testing is Important for Your Team
Why End-to-End Testing is Important for Your Team How our team implemented end-to-end testing in 4 easy steps At Hubba, our business needs are always evolving, and the speed of development needs to keep up with them. One of the ways to keep the team moving forward without breaking everything is end-to-end (E2E) testing. Having a full test suite with E2E tests allows us to move quickly. It allows developers to push code without worrying about breaking things. It enables releases with extra confidence. And it catches errors that are missed during manual regression testing. What is E2E Testing? End-to-end testing is where you test your whole application from start to finish. It involves ensuring that all the integrated pieces of an application function and work together as expected. End-to-end tests simulate real user scenarios, essentially testing how a real user would use the application. An example in Hubba's case would be an E2E test case for a user sign-up. The test would involve: opening Hubba in a browser and searching for certain elements; performing a set of clicks and keystrokes; and ensuring that a user is successfully created. Why Should You Care? At Hubba, we strongly believe in test automation. We currently write unit tests and integration tests for our code. These tests are used to: specify our system; prevent bugs and regressions; and support continuous integration. Furthermore, these tests run as frequently as possible to provide feedback and to ensure that our system remains clean. The motivation for an additional layer of E2E tests lies in the benefits of having a fully automated test suite. These benefits include increasing developer velocity, as well as the other benefits previously mentioned. E2E tests allow us to cover sections of the application that unit tests and integration tests don't cover. This is because unit tests and integration tests only take a small piece of the application and assess that piece in isolation. Even if these pieces work well by themselves, you don't necessarily know if they'll work together as a whole. Having a suite of end-to-end tests on top of unit and integration tests allows us to test our entire application. The faster code fails, the less bugs we find in QA, the faster our QA cycles are -Edward Robinson This is a testing pyramid from Kent C. Dodds's blog, which is a combination of the pyramids from Martin Fowler's blog and the Google Testing Blog. The majority of your tests are at the bottom of the pyramid. As you move up the pyramid, the number of tests gets smaller. Going up the pyramid, tests also get slower and more expensive to write, run, and maintain. We want to write very few end-to-end tests, because they are slow to run and expected to change. This is especially important because, as a startup, we want to move fast. Google often suggests a 70/20/10 split: 70% unit tests, 20% integration tests, and 10% end-to-end tests. The exact mix will be different for each team, but in general, it should retain that pyramid shape. - Google Testing Blog 4 Steps to Get Started 1. Choose a Testing Framework The first action we took was to evaluate various E2E testing frameworks. Our evaluation didn't involve looking at all of a framework's features; it was more of a high-level impression. The main criterion was to pick a framework that was easy to set up and quick to get started with. We did a quick run-through of the following frameworks: CasperJS, Protractor, Nightwatch, and TestCafe.
We made the decision to go with TestCafe because of its easy installation and launch. It is fairly new but growing in popularity. Most noteworthy, it is easy to set up because it doesn't require WebDriver. Because WebDriver isn't required, there is no need to install and maintain additional products and browser plugins; tests can be run right after npm install. This allowed us to quickly write a proof of concept/prototype that got us up and running. Running a sample test in Safari TestCafe uses async/await and ES2017 code for the test files. It also has an implicit auto-wait mechanism, which means TestCafe automatically waits for XHR requests and page loads, so you don't need to take care of that in your code. Pure Node.js - TestCafe is built on top of Node.js and doesn't use Selenium or need plugins to run tests in real browsers. It integrates and works great with modern development tools. No additional setup or configuration - TestCafe is all set to run tests right after npm install. Complete test harness - With a single launch command, TestCafe starts the browsers, runs tests, gathers results, and generates reports. 2. Pick the Important Tests The second step was to determine the core test cases we would write for our application. One of our pain points revolves around QA regression testing. Our quality assurance (QA) cycle consists of manual testing that includes a regression test at the end. These regression tests are a manual process that takes a long time and can potentially miss things due to human error. Hubba's Login We decided on writing test cases related to those regression tests. For Hubba, this included basic, but important, functionality like user sign-up/login and creating a product. The initial batch of test cases: Brand/Buyer/Influencer sign up; Login; Create a product. 3. Integrate into a CI/CD Pipeline The next step was to integrate this into a Continuous Integration and Continuous Deployment, or CI/CD, pipeline. The goal of adding E2E tests to our pipeline is to catch any errors or failing tests before code is shipped to production. We thought of two different ways of integrating this into our system: running the tests every time new code gets pushed to the project, or running the tests periodically. Jenkins We decided on running our E2E tests on a periodic nightly/weekly basis rather than executing the tests on every code change as part of the CD pipeline. The reason is that E2E tests are slow to run. We don't want these tests to slow down our pipeline, since that would delay our process and cycle and affect pull requests, merges, and deployments to different environments. We wanted a set of core E2E tests we can run on a regular basis that lets us know if anything is off or broken. This is why we decided on running these tests nightly via a Jenkins cron job. 4. Create a Proof of Concept/Prototype The last step was to create a proof of concept or prototype to show the E2E tests running and then incorporate them into our system. We also had to decide whether to completely integrate the E2E tests into our current code base or to have a one-off project separate from the main code base. For the initial prototype, we went with a new repository isolated from our main code base, running our tests in the staging environment. In conclusion, while E2E tests are very expensive to maintain, we believe that they are highly valuable as an excellent analogue to user behavior, which helps us test basic user functionality on Hubba.
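For readers curious what one of these tests actually looks like, here is a minimal TestCafe-style sketch of the sign-up scenario described above; the page URL and selectors are hypothetical stand-ins, not Hubba's real markup:

import { Selector } from "testcafe";

// Hypothetical staging URL; point this at your own environment.
fixture("User sign up").page("https://staging.example.com/signup");

test("a visitor can create an account", async (t) => {
  await t
    .typeText(Selector("#email"), "new.user@example.com")
    .typeText(Selector("#password"), "correct-horse-battery-staple")
    .click(Selector("button").withText("Sign up"))
    // TestCafe's auto-wait mechanism handles page loads and XHR requests here.
    .expect(Selector(".welcome-banner").exists)
    .ok();
});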
https://medium.com/free-code-camp/why-end-to-end-testing-is-important-for-your-team-cb7eb0ec1504
['Phong Huynh']
2017-12-08 16:12:57.034000+00:00
['Testing', 'Programming', 'Software Development', 'Code', 'Technology']
428
This Single Investing Rule Made Peter Thiel Billions
This Single Investing Rule Made Peter Thiel Billions Seek Monopolies, Avoid Competition Photo by BP Miller on Unsplash Peter Thiel, who turned into an entrepreneur and a venture capitalist after he successfully co-founded and sold PayPal, has just seen his net worth jump to over $5 billion in recent months. This is quite an achievement, having started with 'only' $10 million after PayPal was sold to eBay in 2002. He developed his investing strategies in a controversial book called Zero to One, where he states that competition is for losers. In this article, we look into how Peter Thiel used this statement as a key investing mantra, which helped him score big in Airbnb, Palantir, Facebook, and SpaceX, amongst other successful investments. Avoid markets where there is perfect competition Perfect competition is considered the default state of any market. Here, companies compete against each other with products that are barely differentiated. Market price is determined by the interaction of supply and demand. The higher the margins in any given market, the higher the incentive for any new company to enter the market. When a market becomes too competitive, both prices and margins drop. Peter Thiel states that, under perfect competition, in the long run, no company makes an economic profit. Seek monopolies Peter Thiel defines a monopoly as the opposite of perfect competition. A monopoly dominates a given market and controls prices. Given that customers have no alternatives, monopolies face inelastic demand, meaning they can increase prices without significantly impacting demand. A good example of such a business model is Google. The US tech company holds more than 70% market share in the web search market. It has such strong dominance that the word 'google' has made it into the Oxford English Dictionary as a verb. However, to downplay its dominance, the company labels itself as an advertising company or, more broadly, a technology company. Within these industries, Google owns only a small share of the market. These tactics help the company downplay its monopoly in web search, given that economists and the media typically give monopolies a bad press. Peter Thiel stresses that one should seek creative monopolies, which are corporations that continue to innovate to make their dominance long-lasting. Big wins in Palantir, Facebook, SpaceX, and Airbnb Acquiring stakes in monopolies has been a key investing mantra for Peter Thiel over the years and has helped him achieve spectacular returns. In the early 2000s, he invested $500,000 in a Harvard drop-out kid's business called Facebook, which turned into the largest social media platform out there. Peter made most of his wealth from investing in Palantir, which labels itself as a big data analytics company but is in reality a counter-terrorism and fraud investigation data company. Other investments include SpaceX, the dominant space company in the US, and Airbnb, the largest online rental marketplace globally. Yes, Peter Thiel has a big advantage in being able to take positions in monopolies at an early stage. But for the rest of us, who don't have access to these deals in the private market, we can use his investing mantra to invest in monopolies as soon as they become listed.
https://medium.com/datadriveninvestor/this-single-investing-rule-made-peter-thiel-billions-deb912c1973b
['Adam Aya']
2020-11-25 15:18:02.973000+00:00
['Money', 'Investing', 'Finance', 'Stock Market', 'Technology']
429
Scope of Blockchain in BIM
By Paras Taneja for Autodesk University Building Information Modeling (BIM) gives architecture, engineering, and construction (AEC) professionals insight and tools to more efficiently plan, design, construct, and manage buildings and infrastructure. However, payments are a challenge and there is a lot of paperwork involved. BIM software should not be limited to generating 3D models and could serve to compile all project information and documentation, such as change orders, invoices, and payment records. Blockchain can address issues surrounding secure access to the model and allow for a reliable audit of who made changes, when they were made, and what those changes were. Contractual processes that typically require human intervention and oversight can be partially or fully automated with smart contracts, which originate from blockchain technology. In this article, I explore technological advancements that are disrupting the AEC industry and share my takeaways about the convergence of blockchain and the industry so far. What Is Blockchain? Blockchain is a decentralized universal ledger that runs on a network of computers that jointly manage a database. It also enables smart contracts. Whenever an entry is made, it gets added as a block to an already existing chain of entries after being authenticated by everyone in the blockchain. This chain is also protected by the best cryptography algorithms available, so it's very difficult to hack. Each block is immutably linked to the previous block, so if a hacker wants to change one block, they have to change all the subsequent blocks in the chain, which is next to impossible. Also, because this is a chain, we can trace back any event block by block. So this technology comes with very strong security. Blockchain. Image courtesy of analyticsinsight.net. Let's say there are 10 people who decide to contribute $1,000 each to a lucky draw. The winner of the draw gets $10,000. For this draw, nobody is in higher authority and there is no bank. If you entered the draw, who do you trust? You have to trust every other participant equally, and you are also aware that if anyone wanted to defraud you, most or all of the participants would have to be convinced to go along with it. Therefore, the concept of centralized trust changes to distributed trust. This distributed trust is the heart and soul of blockchain. How Does Blockchain Relate to Construction? BIM over the last decade and a half has totally transformed the AEC industry. How the AEC industry works now is totally different from how it worked during the 1990s when one compares aspects of design, drawing generation, communication, collaboration, and information exchange. Yet, as noted above, payments remain a challenge, there is still a lot of paperwork involved, and BIM software could go beyond 3D models to compile all project information and documentation. This is where blockchain's secure access, reliable audit trail, and smart contracts come in. Smart contracts refer to computer protocols that set out the terms and conditions of a contract.
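As a toy illustration of that idea, here is the payment-release logic sketched in TypeScript. This is illustrative application code with invented names, not a real smart contract; production smart contracts are written in blockchain-specific languages such as Solidity:

// Toy escrow: funds release automatically once every required party signs off.
class MilestoneEscrow {
  private approvals = new Set<string>();
  private paid = false;

  constructor(
    private readonly requiredApprovers: string[], // e.g., owner, project manager
    private readonly payout: () => void,          // transfers funds to the contractor
  ) {}

  // Each party independently verifies the completed work and signs off.
  approve(party: string): void {
    if (!this.requiredApprovers.includes(party)) throw new Error("unknown party");
    this.approvals.add(party);
    if (!this.paid && this.requiredApprovers.every((p) => this.approvals.has(p))) {
      this.paid = true;
      this.payout(); // no invoice, no third-party verification step
    }
  }
}

const escrow = new MilestoneEscrow(["owner", "projectManager"], () =>
  console.log("funds released to the steel fabricator"),
);
escrow.approve("owner");
escrow.approve("projectManager"); // last sign-off triggers payment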
A smart contract helps the contractor get paid immediately and automatically once the work is completed and verified by all the parties, without any invoicing process. All the information regarding status, changes, etc., is regularly and sequentially uploaded and compiled into immutable and time-stamped blocks. These blocks are linked together, and the linked information is constantly verified on a peer-to-peer basis by other linked block users in the blockchain. This eliminates the need for a third party to verify the accuracy of linked information and serves as a single source of truth. Effectively, once a block of information is created in this chain, it can never be altered without the knowledge of other users in the chain. It is also very simple to verify when a block was created and by whom, and when changes have been made in the blockchain. Blockchain nodes. Image courtesy of Brickschain. What if you never had to submit an invoice again, but still got paid, instantly, the minute you finished your next job? That's the kind of promise blockchain technology holds for the AEC industry. For example, if a steel fabricator is ready to ship the steel components to the job site, they would log this information in the BIM software. The smart contract is linked to the BIM model, and the project account is funded by the owner. Once the components have been delivered to the job site, the project manager would confirm receipt of the components within BIM, and funds would automatically be transferred from the project account to the steel fabricator's account. The exchange of invoices and supplementary documentation in support of a payment claim could also be completed automatically, depending on the extent to which such functions are programmed within the smart contract. Where supply chain management is concerned, if your supply chain is not in sync, then your project is going to suffer. There will likely be delays, which will lead to a lapse in productivity, cost overruns, and an unhappy client. Blockchain in this context can help trace physical items from origin to final destination. It can help improve transparency, which in turn would help involved parties stay on the same page and avoid potential pitfalls and oversights. Challenges to Blockchain Implementation Will blockchain be one of those technologies implemented overnight by construction companies? Most likely, no. In fact, the digital revolution in construction is just beginning, and there's a long road ahead before blockchain becomes the norm. Here's a look at why blockchain in construction still has some obstacles to overcome: Skepticism — Even the most tech-savvy contractors might be hesitant to adopt blockchain into their day-to-day operations. Culture — Blockchain technology is a nontraditional approach to asset transactions. It isn't easy to adapt to, even for organizations that are already digitized. It takes time, effort, and outstanding knowledge transfer to accomplish it. Lack of resources — In order to implement blockchain into everyday operations, a variety of complex systems need to be created.
This comes at a cost, both for the systems themselves and to hire the people required to create and implement them. Market readiness — Is the market ready for blockchain in construction? Not right now. It needs time to mature to a point where it's more of a reality and less of a pipe dream. Cost and efficiency — Blockchain technology is quite competent at cost reduction, but it still faces specific challenges when integrating with legacy systems, and setting up the initial blockchain infrastructure is expensive. The Bottom Line Though the blockchain future holds significant potential, very little can happen until the challenges mentioned above are dealt with. With an uncertain future ahead, a united front among all the countries and a standard set of regulations will be crucial to implement and realize the potential of these technologies. What this boils down to is very simple: blockchain has value in construction because it introduces automation, reducing the burden of administrative and financial processes. These are known to slow construction projects down significantly. Blockchain would be the first technology of its kind to revolutionize these processes. And while the industry may not be ready yet for a full-court blockchain press, know that adoption of such technology is closer to a reality than a remote possibility. Looking towards the future, blockchain will be something that we'll be hearing a lot more about — and it's only a matter of time before it becomes a necessity in the AEC industry. Paras Taneja is passionate about the future of Virtual Design & Construction (VDC), utilizing gaming engines, BIM, and structural computational design. As a junior consultant at Ramboll, he is responsible for several areas, including technology development and applying the appropriate tools for the right job. References Z. Turk and R. Klinc. Potentials of Blockchain Technology for Construction Management. Procedia Engineering 196 (2017): 638–645. Accessed August 21, 2020. doi:10.1016/j.proeng.2017.08.052. How Blockchain Could Mould the Future of BIM and Construction. July 6, 2018. Accessed September 14, 2020. https://www.rib-international.com.
https://medium.com/autodesk-university/scope-of-blockchain-in-bim-69c8cc3409d9
['Autodesk University']
2020-12-04 21:00:33.759000+00:00
['Bim', 'Automation', 'Construction', 'Blockchain Technology', 'Project Management']
430
This Anti-Aging Injection Might Actually Work
This Anti-Aging Injection Might Actually Work Stem cells are overhyped as a cure for everything. But they're finally showing promise in making elderly people stronger. Illustration by Thomas Hedger Phillip George golfs regularly, works out on a treadmill, and lifts a few weights — "not body building, just toning," he says. Now in his 70s, the retired plastic surgeon used to get achy if he overdid it at the gym or the golf course, and often popped a few Advil or Tylenol. But George cut way back on the painkillers two years ago, after becoming one of the first patients to try a new tactic to slow aging. As part of a clinical trial led by University of Miami cardiologist Joshua Hare, George got an infusion of millions of someone else's stem cells, the kind of multi-purpose cells that can form other types of cells. "The pain diminished dramatically," George says. There are a lot of doctors who promise cures from stem cells. Hare isn't one of them. At least not yet. Nevertheless, after spending his career investigating what stem cells can do for the body, Hare thinks he's hit upon a way to reduce frailty, boost the immune system and heart, and maybe even fight off Alzheimer's: by giving patients large doses of a certain kind of stem cell. His approach hasn't been proven to work yet, but a review of clinical trials and discussions with scientists suggest that Hare is closer than anyone else to using stem cells to address problems caused by aging. Although stem cells are considered an incredibly promising area of research, the procedures that have been definitively shown to work are ones that have existed for decades, such as bone marrow transplants and similar interventions for cancer and some blood diseases. No new treatments using the cells have been proven to be medically effective. It's not for a lack of trying: the National Library of Medicine lists about 1,700 ongoing studies involving stem cell treatments. But companies that sell treatments for everything from knee pain to autism and heart disease to Parkinson's disease do not yet have scientific proof in humans. On top of that, stem cell therapy can be risky when sloppily delivered. Three older women were blinded by a treatment intended to reverse their vision loss, according to a report last year in the New England Journal of Medicine. Many others, including professional athletes and celebrities, have spent tens of thousands of dollars for treatments that have not been shown to be any more effective than placebo. In stepping up its enforcement against unscrupulous clinics last year, the U.S. Food and Drug Administration warned consumers to avoid stem cell therapies that are not FDA-approved, unless they are part of a registered research trial. That was the case when Phillip George got his infusion two years ago. He was in the first clinical trial to test these cells as a treatment for aging-related frailty. He says the infusion wasn't painful, and he doesn't think he took any kind of a risk, even though the procedure had been tried on only about 100 people before him. George also had known Hare for 11 years, since he was on the hiring team that brought the younger doctor to Miami. So far, so good, George says: "Two years into the process, I don't see or recognize any negatives from it." Regeneration Your supply of stem cells naturally falls with age. This makes it harder for the body to repair damage, and can lead to inflammation.
Inflammation, in turn, undergirds many of the problems we associate with old age, including frailty, heart disease, immune weakness, and Alzheimer’s, says Anthony Oliva, senior scientist at Longeveron, the privately held Miami company that Hare started to bring his ideas to market. Joshua Hare Longeveron uses mesenchymal stem cells, which come from bone marrow. They are known to be involved in regulating and sometimes reducing inflammation, as well as in helping to boost repair mechanisms for blood vessels. They may also prompt the body’s own stem cells to get more active in beneficial ways. There are currently 817 studies worldwide investigating the use of mesenchymal stem cells in people, for everything from knee injuries to ulcerative colitis, according to the federal government’s ClinicalTrials.gov. Mesenchymal stem cells are being tested in cats to see if they can reduce feline inflammation. “You can really consider them to be these miniature drug-delivery factories,” Oliva says, describing mesenchymal stem cells as almost miraculous. “They can last for many months inside the recipient. They’re homing to the site of inflammation and damage. They are decreasing inflammation. They’re promoting improvement of the vasculature. They’re stimulating intrinsic stem cells to repair and regenerate over months.” Hare’s research has cataloged how an objective measure of inflammation, the cell-signaling protein TNF-alpha, falls after an infusion of mesenchymal stem cells. In both animals and people, Hare says, the protein level remains lower for six to 12 months. The cells also appear to be safe. “Working in medicine 30 years, I’ve never seen anything this well tolerated,” Hare says. The immune system doesn’t react to these donor stem cells because it essentially doesn’t see them, Oliva says. Most cells in the body express a protein called MHC class II on their surface. The protein acts like a flag to alert the body to something foreign. In an organ transplant, the patient’s MHC class II has to match the donor’s to reduce chances of rejection. But mesenchymal stem cells don’t have these flags. Longeveron uses cells from young donors, who are paid a small stipend and have been screened three times for diseases like HIV, hepatitis, and Zika, Oliva says. Their mesenchymal stem cells are extracted in a far less painful way than you might imagine, with just local anesthesia, a single needle stick, and a Band-Aid to cover the spot. Longeveron then cultures the cells—putting them in a fluid to make them proliferate—before infusing them into a patient. For now, one donor can supply tens of patients, but Hare hopes to improve the culturing process so that one donor can provide enough cells for several hundred doses. If the company’s trials persuade regulators to approve the therapy, “we’re going to have a very large market and a very significant need,” he says. A mysterious result For its frailty research, Longeveron is now running a phase 2b trial — roughly the midpoint of the clinical trial process. It has tested its cells in about 20 out of 120 patients so far, says Suzanne Page, the chief operating officer. Longeveron also has an early trial comparing the donor cells versus a placebo in a small number of patients with Alzheimer’s disease; another trial looking at frail patients’ resistance to the flu virus after receiving the donor cells compared to placebo; and a fourth looking at metabolic syndrome. 
Hare is also conducting a study checking back with patients who received stem cell treatments years ago for heart conditions. To treat organ damage, like a heart attack that leaves a scar in the heart muscle, stem cells probably have to be injected directly into the organ, Hare says. But for a systemic, aging-related condition like frailty, the cells can be infused into the bloodstream because they tend to "traffic to inflamed areas," he says. Longeveron's first published study on frailty found that patients such as George, who received 100 million mesenchymal stem cells, showed "remarkable improvements in physical performance measures and inflammatory biomarkers." The average age of the patients was 75.5. Oddly, the patients who received a double dose of 200 million donor cells did not see any benefit at all. And that result fuels the concerns that molecular biologist Andrew Mendelsohn has about the research. Mesenchymal stem cells have been the subject of hundreds of clinical trials, notes Mendelsohn, who wrote a commentary about Longeveron's experiment in the journal Rejuvenation Research. Some studies show improvements, some don't. When the studies are repeated, their results are inconsistent. "If you do the experiment in mice, you see all kinds of great results. When you get to people you get all kinds of mixed results," says Mendelsohn, director of molecular biology for the Panorama Research Institute, a privately owned biomedical research and development holding company in Sunnyvale, California. Mendelsohn says he thinks Hare's research has promise. But he won't be convinced, he says, until he sees clear evidence in repeated clinical trials. "I have a strong belief that eventually it can be engineered to work out, even if it's not nearly as effective as we might like it to be," Mendelsohn says. Paul Knoepfler, a biologist and stem cell scientist at the UC Davis School of Medicine, says he too is troubled by the inconsistent results of stem cell studies, and by the therapy's relatively modest benefits. Other research suggests to him that donor mesenchymal stem cells don't stay in the body for very long, contrary to what Oliva says about how they last for months. Knoepfler is not sure how patients could have a long-term benefit if the body clears out the cells within a week or so. Furthermore, frailty and other age-related processes are complicated. They're not caused by one trigger and they probably can't be fixed by one type of treatment, Knoepfler says. Even so, he considers Hare's work worthwhile and says it's being conducted responsibly. "I don't see any red flags," Knoepfler says. "It's just sort of early days." Phillip George acknowledges that the benefits he saw might have been caused by a placebo effect. But he's still signed up to get a second infusion of the cells, part of a test of whether it can be given multiple times without harm. "I'm not a Pollyanna. I'm not jumping at the first of everything," George says. But if he has the opportunity to benefit from a potentially exciting new treatment, if there's no apparent downside, and if he can benefit science in the process, then why not try?
https://medium.com/neodotlife/longeveron-stem-cells-for-frailty-aging-d94d8c9f2de6
['Karen Weintraub']
2020-10-22 23:06:16.688000+00:00
['Technology', 'Longevity', 'Aging', 'Stem Cell Research', 'Biotechnology']
431
How Should You Leverage the Cloud to Get Your Projects Done?
How Should You Leverage the Cloud to Get Your Projects Done? An introduction to cloud computing Photo by Robert Bye on Unsplash "It sounds like you want to get rid of IT." That's what one of the techs on my team said when I told him we were going to start migrating all of our on-premises systems to our AWS account. His statement hit me a bit sideways because I didn't understand why anyone would think that way. Moving services and systems up to the cloud wasn't "getting rid of IT". Instead, it was bringing it from the past into the PRESENT. At the time, I was managing a small IT team at a growth-phase startup. It had been around for about 7 years, and when I stepped in there I didn't just see an IT department. I saw aging servers sitting on a floor. I saw overly complicated solutions to simple problems, just so a computer could sit alone in a server rack in a room with bad ventilation. I saw single points of failure everywhere. If one of the computers we had hosting four different application virtual machines had a catastrophic hardware failure, we would be in trouble. Mission-critical software was running on machines that were as old as the company, with no redundancy, no functional backups, and no plan for when things went wrong. Having all of our servers in the building was a disaster waiting to happen, and AWS was the solution to my problems. It's just someone else's computer, right? Running your servers in a cloud computing environment means you're just using someone else's computer, housed somewhere other than your business. Do you trust that company and its computers to hold your data? Is it expensive to rent other people's hardware when you can just buy your own and be done with it? How much more will you have to learn to effectively leverage cloud computing services versus putting an old computer in a closet with an oscillating fan blowing on it? Yes, it is someone else's computer. Yes, you can trust them (there's way too much money riding on them providing high-performing and tightly secured systems for you, their customer). And yes, you'll have to learn how to use their systems and management consoles, and figure out how to keep your bill down to a reasonable amount while getting the most you can out of their cloud. Who are the main providers? There are three primary cloud computing service providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. Those are the leaders in the industry and provide the widest variety of rock-solid solutions. There are some other players, but they are either smaller or they provide more specialized offerings. For example, if your business has been using an Oracle database for years and you really want to stick with their platform and services, they have cloud offerings for you. If you want a simple application platform without having to mess with any infrastructure setup, maybe Heroku has something to meet your needs. There are a lot of options when you start looking at cloud computing service providers, but with those options come a lot of questions. You're going to have to learn which provider is going to meet your needs, both right now and in the future. Choosing a cloud to stick your business in isn't a simple decision. You've got to think long-term, and you've got to think big. It's not just a question of where you are right now, but of where you are going to be once your five-year roadmap is a success. I've been using AWS for a few years, so I'll automatically lean towards it when I have a new project.
That doesn't mean it's right for everyone or every project. Azure might be the perfect solution. Likewise, Google Cloud might be just what you need to run your AI/ML workload and drive your customer intelligence solutions to the next level. How to get started You get started by researching and experimenting with one of the three main cloud providers. Sure, there are other offerings you can choose, but I suggest you just start with AWS, Azure, or Google. They are the leaders, and they will likely be the best solution for you. But you won't know until you get started. AWS lets you get started for free and use it for a year, as long as you don't go over their free tier limits. Azure has a similar program, offering 12 months to sample their services. Google Cloud is a little different, but it still has a free tier, and they give you service credits to try out their paid offerings without incurring any charges. No matter which one of the big three you choose to start your cloud experiments, you can do so for free. Don't let aging servers sitting in a dirty room be your downfall. Just pick a cloud provider and start learning.
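If you want something concrete to try on day one, here is the sort of first experiment I mean: a minimal AWS CDK app in TypeScript that defines a single S3 bucket (the stack and bucket names are arbitrary placeholders). An empty bucket sits comfortably inside the free tier:

// Minimal AWS CDK app: one stack containing one S3 bucket.
import * as cdk from "aws-cdk-lib";
import * as s3 from "aws-cdk-lib/aws-s3";

const app = new cdk.App();
const stack = new cdk.Stack(app, "FirstExperimentStack");

new s3.Bucket(stack, "FirstExperimentBucket", {
  versioned: true,                          // keep old versions of objects
  removalPolicy: cdk.RemovalPolicy.DESTROY, // let "cdk destroy" delete the bucket
  autoDeleteObjects: true,                  // empty the bucket on teardown
});

Run cdk deploy to create it, poke around in the console, then cdk destroy to clean up; Azure and Google Cloud have equivalent infrastructure-as-code starting points.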
https://medium.com/swlh/how-should-you-leverage-the-cloud-to-get-your-projects-done-ae9c7f327b2c
['Caleb Rogers']
2019-12-09 04:55:15.840000+00:00
['AWS', 'Technology', 'Software Development', 'Cloud Computing', 'Business Intelligence']
432
AlphaFold 2 Explained: A Semi-Deep Dive
Image by Dale At the end of last month, DeepMind, Google's machine learning research branch known for building bots that beat world champions at Go and StarCraft II, hit a new benchmark: accurately predicting the structure of proteins. If their results are as good as the team claims, their model, AlphaFold, could be a major boon for both drug discovery and fundamental biological research. But how does this new neural-network-based model work? In this post, I'll try to give you a brief but semi-deep dive into both the machine learning and the biology that power this model. First, a quick biology primer: The functions of proteins in the body are entirely defined by their three-dimensional structures. For example, it's the notorious "spike proteins" studding the coronavirus that allow the virus to enter our cells. Meanwhile, mRNA vaccines like Moderna's and Pfizer's replicate the shape of those spike proteins, causing the body to produce an immune response. But historically, determining protein structures (via experimental techniques like X-ray crystallography, nuclear magnetic resonance, and cryo-electron microscopy) has been difficult, slow, and expensive. Plus, for some types of proteins, these techniques don't work at all. In theory, though, the entirety of a protein's 3D shape should be determined by the string of amino acids that make it up. And we can determine a protein's amino acid sequence easily, via DNA sequencing (remember from Bio 101 how your DNA codes for amino acid sequences?). But in practice, predicting protein structure from amino acid sequences has been a hair-pullingly difficult task we've been trying to solve for decades. This is where AlphaFold comes in. It's a neural-network-based algorithm that's performed astonishingly well on the protein folding problem, so much so that it seems to rival in quality the traditional slow and expensive imaging methods. Sadly for nerds like me, we can't know exactly how AlphaFold works because the official paper has yet to be published and peer reviewed. Until then, all we have to go off of is the company's blog post. But since AlphaFold (2) is actually an iteration on a slightly older model (AlphaFold 1) published last year, we can make some pretty good guesses. In this post, I'll focus on two core pieces: the underlying neural architecture of AlphaFold 2 and how it managed to make effective use of unlabeled data. First, this new breakthrough is not so different from a similar AI breakthrough I wrote about a few months ago, GPT-3. GPT-3 was a large language model built by OpenAI that could write impressively human-like poems, sonnets, jokes, and even code samples. What made GPT-3 so powerful was that it was trained on a very, very large dataset, and based on a type of neural network called a "Transformer." Transformers, invented in 2017, really do seem to be the magic machine learning hammer that cracks open problems in every domain. In an intro machine learning class, you'll often learn to use different model architectures for different data types: convolutional neural networks are for analyzing images; recurrent neural networks are for analyzing text. Transformers were originally invented to do machine translation, but they appear to be effective much more broadly, able to understand text, images, and, now, proteins. So one of the major differences between AlphaFold 1 and AlphaFold 2 is that the former used convolutional neural networks (CNNs) and the new version uses Transformers.
Now let's talk about the data that was used to train AlphaFold. According to the blog post, the model was trained on a public dataset of 170,000 proteins with known structures, and a much larger database of protein sequences with unknown structures. The public dataset of known proteins serves as the model's labeled training dataset, a ground truth. Size is relative, but based on my experience, 170,000 "labeled" examples is a pretty small training dataset for such a complex problem. That says to me the authors must have done a good job of taking advantage of that "unlabeled" dataset of proteins with unknown structures. But what good is a dataset of protein sequences with mystery shapes? It turns out that figuring out how to learn from unlabeled data ("unsupervised learning") has enabled lots of recent AI breakthroughs. GPT-3, for example, was trained on a huge corpus of unlabeled text data scraped from the web. Given a slice of a sentence, it had to predict which words came next, a task known as "next word prediction," which forced it to learn something about the underlying structure of language. The technique has been adapted to images, too: slice an image in half, and ask a model to predict what the bottom of the image should look like just from the top: Image from https://openai.com/blog/image-gpt/ The idea is that, if you don't have enough data to train a model to do what you want, train it to do something similar on a task that you do have enough data for, a task that forces it to learn something about the underlying structure of language, or images, or proteins. Then you can fine-tune it for the task you really wanted it to do. One extremely popular way to do this is via embeddings. Embeddings are a way of mapping data to vectors whose positions in space capture meaning. One famous example is Word2Vec: it's a tool for taking a word (e.g., "hammer") and mapping it to n-dimensional space so that similar words ("screwdriver," "nail") are mapped nearby. And, like GPT-3, it was trained on a dataset of unlabeled text. So what's the equivalent of Word2Vec for molecular biology? How do we squeeze knowledge from amino acid chains with unknown, unlabeled structures? One technique is to look at clusters of proteins with similar amino acid sequences. Often, one protein sequence might be similar to another because the two share a similar evolutionary origin. The more similar those amino acid sequences, the more likely those proteins serve a similar purpose for the organisms they're made in, which means, in turn, they're more likely to share a similar structure. So the first step is to determine how similar two amino acid sequences are. To do that, biologists typically compute something called an MSA, or Multiple Sequence Alignment. One amino acid sequence may be very similar to another, but it may have some extra or "inserted" amino acids that make it longer than the other. MSA is a way of adding gaps to make the sequences line up as closely as possible. Image of an MSA. Modi, V., Dunbrack, R.L. A Structurally-Validated Multiple Sequence Alignment of 497 Human Protein Kinase Domains. Sci Rep 9, 19790 (2019). According to the diagram in DeepMind's blog post, MSA appears to be an important early step in the model. Diagram from the AlphaFold blog post. You can also see from that diagram that DeepMind is computing an MSA embedding. This is where they're taking advantage of all of that unlabeled data. To grok this one, I had to call in a favor with my Harvard biologist friend.
It turns out that in sets of similar (but not identical) proteins, the ways in which amino acid sequences differ are often correlated. For example, maybe a mutation in the 13th amino acid is often accompanied by a mutation in the 27th. Amino acids that are far apart in a sequence typically shouldn't have much effect on each other, unless they're close in 3D space when the protein folds, a valuable hint for predicting the overall shape of a protein. So, even though we don't know the shapes of the sequences in this unlabeled dataset, these correlated mutations are informative. Neural networks can learn from patterns like these, distilling them into embedding layers, which seems to be what AlphaFold 2 is doing. And that, in a nutshell, is a primer on some of the machine learning and biology behind AlphaFold 2. Of course, we'll have to wait until the paper is published to know the full scoop. Here's hoping it really is as powerful as we think it is.
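To make the correlated-mutations idea concrete, here is a toy TypeScript sketch over an invented four-sequence mini-alignment. It crudely measures how often two alignment columns mutate away from the first sequence together; real co-evolution methods use proper statistics over thousands of sequences:

// Toy co-variation check over aligned sequences (alignment data is invented).
const alignment = [
  "MKVLA",
  "MKVLA",
  "MRVIA", // columns 1 and 3 (0-indexed) differ together here...
  "MRVIA", // ...hinting those positions may touch in the folded structure
];

// Of the sequences that differ from the reference at column i or j,
// what fraction differ at BOTH columns?
function coMutationRate(seqs: string[], i: number, j: number): number {
  const ref = seqs[0];
  let both = 0;
  let any = 0;
  for (const s of seqs) {
    const di = s[i] !== ref[i];
    const dj = s[j] !== ref[j];
    if (di || dj) any++;
    if (di && dj) both++;
  }
  return any === 0 ? 0 : both / any;
}

console.log(coMutationRate(alignment, 1, 3)); // 1 -> strong co-variation signal
console.log(coMutationRate(alignment, 1, 4)); // 0 -> no co-variation signal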
https://towardsdatascience.com/alphafold-2-explained-a-semi-deep-dive-fa7618c1a7f6
['Dale Markowitz']
2020-12-09 16:23:33.585000+00:00
['Data Science', 'Technology', 'Python', 'Machine Learning', 'Biology']
433
A recap of 2020 in education technology
Hindsight is 20/20. Yes, it is always easier to evaluate situations when we're looking back at them than when we're in the present moment. This couldn't be truer than for the year 2020. For students, parents, and teachers everywhere it was a whirlwind of an experience — the way we learn and interact in classrooms changed. The digital medium definitely came to the aid of student learning this year. At the same time, it became a source of frustration for teachers, students, and parents helping with online learning. We are taking a look back at this year, what it meant for education technology, and how this will affect education for years to come. Cybersecurity and Privacy We coined a new term this year. Although Zoom launched in 2013, it was only this year that 'zoombombing' became a commonly used word. Classrooms have experienced interruptions during online lessons as internet trolls have marched in shouting obscenities and sharing obscene images. Schools have also seen a steep rise in cyber attacks with the rise in remote access by students and teachers, which exposed the vulnerabilities of some of these networks. All of this has definitely heightened the need to prioritize cybersecurity efforts, as well as funding for them. There needs to be a higher focus on raising awareness among school staff and employees about cybersecurity threats, privacy protocols, and the role they need to play in keeping students safe. Keith Krueger, CEO of CoSN (The Consortium for School Networking), rightly said, "Just getting devices and broadband connectivity, Wi-Fi, that alone is insufficient if the network isn't usable, isn't safe and secure." The digital divide This year has definitely shown us the huge gap between people who have sufficient knowledge of and access to technology, and people who don't. These inequities affected student learning and exposed a divide that has persisted in the education industry for a decade. Research by EdWeek shows that in districts with 75% or more low-income students, only 57% of middle-school students and 74% of high-school students have at least one device. Those percentages jump to 87% and 96% respectively for school districts with 25% or fewer low-income students. As the beginning of the year threw us all into suddenly accommodating online learning, school staff everywhere struggled to get one laptop or device into the hands of each student. School district administrators everywhere scrambled to provide Wi-Fi hotspots to support student connectivity to online classrooms. Though all of this seems like a band-aid solution, it definitely is a step in the right direction. A lot of progress remains to be made in the years to come, and the digital divide will remain a challenge. Remote learning experiences It is no secret that in-person interaction cannot be replicated via an online video platform, and that is all the more true for students. Teachers have reported a myriad of problems, including: students are not interacting with me; students have more trouble focusing on work at home than they have at school; students have problems using technology effectively for academic purposes; it is difficult for me to tell if my students are learning or need more help. Close to half of the teachers reported that it is difficult to provide for the needs of students with disabilities. A lot of these problems are amplified by racial and socio-economic differences.
As teachers compensated for these shortcomings with added phone calls, extra time with the students, and focused attention, an inevitable outcome was that they were struggling to keep up, putting in extra hours on top of an already overwhelming workload. High-quality remote learning is hard to do and is not yet a viable option, at least not at scale. The experiences of students and teachers vary depending on a lot of factors. Based on what we've seen, we have a lot of work to do before we consider remote learning a long-term option. No matter how the year went, I hope 2020 was a lesson in resilience. I hope our children will come out of these times with the idea, 'I did the best I could. I learned a few things and I will continue to learn…'
https://medium.com/@priyanka-raha/a-recap-of-2020-in-education-technology-65a70bacc34b
['Priyanka Raha']
2021-01-12 22:21:53.380000+00:00
['Education', '2020', 'Technology', 'Learning', 'Edtech']
434
Californians Can Now Track Covid-19 Exposure on Their Phones
Californians Can Now Track Covid-19 Exposure on Their Phones CA Notify app installation. Images: Gado Images California took a major step forward in Covid-19 technology today when the state became among the first in the USA to launch a customized coronavirus exposure notification app. This morning, many Californians awoke to an Android or iOS notification informing them that the app, called CA Notify, was live. Users can download CA Notify from the Google Play Store on Android or the App Store on iOS devices. It is also being pushed to your phone to prompt you to sign up. CA Notify uses your phone to track potential Covid-19 exposures as you go about daily life and to notify you of situations where you might have been exposed to a Covid-19 positive person. Because our phones are embedded with sensors already — and include location-tracking GPS and Wi-Fi tech — exposure notification apps seem like they should be easy to create with existing technologies. A theoretical notification app could log your location and the locations of other users, much as Google Timeline logs the location of your Android phone already. If another user tested positive, the app could compare their location history to yours, and notify you if you spent time in the same place. It seems like a fairly easy problem for modern tech to solve. Except it's not. For starters, an app that constantly logs your location and sends it to a centralized government database would be a privacy nightmare. Any government entity with access to the database could pull up your location history any time they wanted. The implications for biased policing, illegal searches, and other abuses would be monumental. Constantly recording your fine-grained location and sending it off to the government would also be a massive drain on your phone's battery, data plan, and processor. Instead of storing this data with the government, governments worldwide have worked together with both Apple and Google to create an innovative solution that allows for exposure notifications while keeping your phone snappy and minimizing the risk to your privacy. The system is built on a protocol that uses the Bluetooth chip in your phone to communicate anonymously with other phones around you. If you download an app like CA Notify, your phone will constantly look for the Bluetooth signatures of other app users who are physically near you. When it detects another user, it will log their presence, recording how long you spent in their proximity and approximately how close you were. All this data is stored on your phone — it doesn't get sent off to a central location, so the risk that it will be used to monitor your movements is minimal. Your phone will also broadcast your own signature, so other users can record when they've been around you. If an app user tests positive for the virus, they can log this status in their CA Notify app. CA Notify will then send an anonymous message out to all other app users, with the infected user's unique signature. This won't be linked to their identity, so their privacy will be protected. Your phone will look at this message and see if the infected person's signature appears in your phone's personal list of the people you've spent time around. If it does — and if you spent enough time around the infected person that you could potentially have become infected yourself — you'll get a notification in your own CA Notify app. You can then choose to act on this however you'd like.
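Here is a toy sketch of that on-device matching step in TypeScript. The identifiers, durations, and 15-minute threshold are invented for illustration; the real Apple/Google protocol derives rotating identifiers cryptographically and scores exposures far more carefully:

// Toy on-device exposure matching (all identifiers and numbers invented).
interface Encounter {
  rpi: string;     // rolling proximity identifier heard over Bluetooth
  minutes: number; // how long the other phone stayed nearby
}

// Signatures this phone has logged locally.
const seenNearby: Encounter[] = [
  { rpi: "a1f3", minutes: 22 },
  { rpi: "9c0d", minutes: 3 },
];

// Identifiers published after their owners reported a positive test.
const reportedPositive = new Set(["a1f3", "77e2"]);

// Flag an exposure only if the contact lasted long enough to matter.
function hasExposure(encounters: Encounter[], positives: Set<string>, minMinutes = 15): boolean {
  return encounters.some((e) => positives.has(e.rpi) && e.minutes >= minMinutes);
}

console.log(hasExposure(seenNearby, reportedPositive)); // true: 22 minutes near "a1f3"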
CA Notify and the system behind it are an elegant compromise between keeping people informed and respecting their privacy (and the costs to their data plan). I downloaded the app this morning and will be experimenting with it more to see exactly how it works. The app itself is simple, with just a few tabs — Exposures, Notify Others, and Settings, which allows users to determine if they'd like to share additional performance and analytics information with the state. The app's effectiveness will likely depend on how many Californians download it, and how well it filters out false positives. If you get a notification for every Covid-19 positive person you drive past (or everyone you're within 100 yards of, even if they're in a different building), people will likely get fed up and abandon the app. But if it automates the process of informing people about exposures, takes some burden off of contact tracers, and empowers citizens to manage and reduce their own exposure risk, the app could be hugely beneficial. I could see a future where exposure notifications are gamified. Just as people try to maximize their steps with a Fitbit, they could try to minimize their potential exposures by social distancing, going to businesses at less crowded times, and the like. It may be only a matter of time before a screenshot of the app showing "No Exposures" becomes a social media badge of honor, showing others that you're taking Covid-19 measures seriously. Around 20 other U.S. states have already launched notification apps of their own, based on the same Apple/Google system. If you're a Californian, consider checking out CA Notify today. If you're not, check and see if your own state has a similar app, and see if it proves helpful in your own personal fight against the pandemic.
https://thebolditalic.com/ca-notify-app-turns-your-phone-into-a-covid-19-exposure-monitor-ff6b7ba3f1ab
['Thomas Smith']
2020-12-11 22:55:04.412000+00:00
['Apps', 'Public Health', 'Covid 19', 'Technology']
435
Response to Coincheck Hack
Time Machine Series Jason Lee, Australasia’s Director of Partnerships and Strategic Alliances at the NEM.io Foundation, told Kyodo News in a phone interview that on Saturday (27th Jan 2018), just hours after the hacking attack, the Coincheck team was in contact with the NEM team. Roughly 58 billion yen ($532 million) in NEM was stolen from Coincheck on Friday, marking the largest ever loss of cryptocurrency to hackers. Within 24 to 48 hours, the NEM team had developed a solution: tagging the stolen cryptocurrency as tainted funds, making it easy to verify if the funds are withdrawn from or deposited to regulated trading platforms. That, Lee said from Hong Kong, “would allow us to potentially identify the hackers.” However, he conceded that “at this moment we don’t know where they are. We’re hoping that the hackers will eventually return the money.” “We have done what we could from the NEM foundation’s side…so the only way we can track whether the hackers have done anything…is (to monitor) their next move.” He said that if the missing funds are transacted through a licensed cryptocurrency exchange, then they will be able to have more information on the tokens themselves. However, he added that “there could be other avenues for them to cash out” and that if the missing funds are not transacted through an exchange, “the identity will be difficult to check.” So far, Japanese authorities have been dealing directly with Coincheck in the investigations, Lee said. If asked for assistance, his foundation “would do its level best in ensuring that this investigation is dealt with full integrity and transparency.” Lee said the sole purpose of the foundation is to introduce, educate and promote the use of the NEM blockchain technology platform, which it did not itself create, on an international scale to all industries and institutions. He urged cryptocurrency exchanges to install NEM’s built-in security feature, “multi-signature verification,” which requires more than one person to acknowledge the movement of funds. In Coincheck’s case, “they did not implement the multi signature verification,” he said. He emphasized that the incident has nothing to do with the NEM blockchain, which remains “one of the most secure and simple blockchain systems to use.” Originally published on: https://english.kyodonews.net/news/2018/01/27d03eee6750-nem-experts-tagged-missing-cryptocurrency-to-help-track-culprit.html
https://medium.com/nem-anz/response-to-coincheck-hack-fa13db7afad9
[]
2018-07-04 05:37:36.589000+00:00
['Announcements', 'Cryptocurrency', 'Nem', 'Blockchain Technology', 'Blockchain']
436
The impact of technology on reality
The impact of technology on reality What is truth in the 21st century? Source: author I used to be one of the conspiracy theorists, but that was back in the day when we would be discussing hidden truths about UFOs, alien abductions, the JFK assassination, and the like. Then I grew up, intellectually, and accepted that evolution in technology suddenly made it easy for various hoaxes to be presented in hard-to-dispute ways, for the masses at least. “Truths” in the form of deep fakes are usually more accessible to anyone than doctored images and audio used to be in the past: one had to spend some serious money on equally serious equipment that would in return take up some real space instead of megabytes or gigabytes. And today it’s easier to manipulate and disseminate all that b.s. which, if done correctly (or incorrectly, depending on your point of view), can become the truth simply by being promoted by “trustworthy” sources of information. Then, by being repeated enough times, no one can actually tell its source, and if enough people agree that it’s a fact, then it’s a fact, or you’re the crazy one. The truth What does it mean? As someone who has been enjoying learning various languages, seeing similarities or simply curious word usages eventually led me to become interested in etymology. With the whole chaos of what is true/false in the news, it occurred to me to look into the etymology of the word truth. The word truth has many roots. The interesting one, the one that is not as well supported, has surprisingly the most truthfulness to it :) See what I did there? Anyway, “deru” (a Proto-Indo-European root) means solid/firm, https://www.etymonline.com/word/*deru- and in Sanskrit “dru” translates to tree/wood, in Serbian “drvo” translates to tree as well, and I could copy-paste the rest of it from the link, but the point is that from tree -> true -> truth, and the logic behind its assigned meaning is: “as solid as a piece of wood/oak”. Why not question the truth? There are valid reasons why a person would accept certain truths — fear. Fear of the unknown. Another source of the word truth is “trod” in Nordic, which means faith, among other adjacent definitions/translations. What do people usually seek from faith? Comfort. A meaning to the horrible and also wonderful events in their life, and in the lives of others. Why believe the lie? It’s easier, quicker, fast food for the soul. How to invalidate truth, how to validate a lie It’s hard to prove things. Sometimes it’s not even about the validity of the truth, but about the agreement on what is true. If you’re in a room with a hundred people and you are all looking at the same image, and you see a bird while a hundred people see a horse, well, the “truth” becomes that it was an image of a horse that you were looking at. As per the old Twilight Zone episode, the truth is in the eye of the beholder. If you are the only one who disagrees then you’re the one who is in the wrong, or in other words — you should be committed. How to fight it today Challenge it. But that requires work, and why would anyone want to work for a quick tweet or a Facebook update, etc. In the end, it depends on whether you’re looking to spread the information or to apply it to something silly or serious. If it’s about spreading the information, you had better be sure it’s the real deal. If it’s something as silly as the YouTube prank clips of unlocking a car with a tennis ball, give it a go, it can’t hurt (unless you find yourself locked out of your car).
And of course, if it’s something more dangerous like… well, I guess I better not give examples, just in case :) , but you do understand that the greater the risk of something turning out to be false, the more you have to make sure of its validity. How to fight it tomorrow The blockchain/bitcoin approach: data/information that can’t be manipulated. But I have no idea how to apply it without risking the old “who watches the watchmen” concern, or in this case, who watches the ghost in the machine. Because there is a dark side to this approach as well: manipulation of the data that we all have access to but can’t verify any of first hand, which makes it easy to miss what is true and what is false.
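To make the tamper-evidence idea behind that blockchain approach concrete, here is a minimal, illustrative Java sketch of hash chaining. It is a toy example of my own, not any particular blockchain's implementation: each record commits to the hash of the one before it, so quietly editing any past record breaks every hash after it.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.HexFormat;

// Toy illustration of tamper-evident records: each entry commits to the hash
// of the previous one, so editing any past record changes all later hashes.
public class HashChain {
    static String sha256(String input) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        return HexFormat.of().formatHex(md.digest(input.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String[] records = {"claim A", "claim B", "claim C"};
        String prevHash = "0".repeat(64); // placeholder "genesis" hash
        for (String record : records) {
            prevHash = sha256(prevHash + record); // chain each record to its predecessor
            System.out.println(record + " -> " + prevHash);
        }
    }
}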
https://medium.com/predict/the-impact-of-technology-on-reality-ae82fd0ad14a
['Vuk Ivanovic']
2020-12-30 05:34:32.572000+00:00
['Technology', 'Internet', 'Fake News', 'Truth', 'Future']
437
How to Create a Node REST Stub with Swagger Codegen
Generating Swagger Specification document In this section, we will use a Spring Boot application to generate the Swagger specification document, which consists of the REST API specification. This document can then be shared with the API consumers so that a Node JS client can be created using the Swagger Codegen utility. To generate the Swagger specification, we have created a sample Spring Boot project. It can be downloaded from this Github link. This project is a book management service and exposes the following REST endpoints: GET v1/books/: List all books POST v1/books/: Create a new book GET v1/books/{book_id}: Get a book resource DELETE v1/books/{book_id}: Remove a book Download this project and set it up in your favorite IDE. To generate the Swagger document, we will update the pom.xml file with the springfox-swagger2 and springfox-swagger-ui maven dependencies as shown below: <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger2</artifactId> <version>2.9.2</version> </dependency> <dependency> <groupId>io.springfox</groupId> <artifactId>springfox-swagger-ui</artifactId> <version>2.9.2</version> </dependency> Following this, we will add the Swagger configuration in a SwaggerConfiguration class. This configuration lets Swagger detect the REST endpoints and adds some metadata about the API owner (API name, email, website and so on); a reconstructed sketch of the class follows this article’s text. Start the application and head over to http://localhost:8080/v2/api-docs. This shows the Swagger OpenAPI specification document. Save this JSON file as swagger-book-api.json. Alternatively, we can also view the REST endpoint information at http://localhost:8080/swagger-ui.html.
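The embedded SwaggerConfiguration gist referenced above did not survive extraction. Below is a minimal sketch of what such a springfox 2.9.2 configuration typically looks like; the package name and the API metadata values are placeholder assumptions, not the article's originals.

package com.example.bookservice.config; // hypothetical package name

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import springfox.documentation.builders.ApiInfoBuilder;
import springfox.documentation.builders.PathSelectors;
import springfox.documentation.builders.RequestHandlerSelectors;
import springfox.documentation.service.ApiInfo;
import springfox.documentation.service.Contact;
import springfox.documentation.spi.DocumentationType;
import springfox.documentation.spring.web.plugins.Docket;
import springfox.documentation.swagger2.annotations.EnableSwagger2;

@Configuration
@EnableSwagger2
public class SwaggerConfiguration {

    // Tells springfox which controllers and paths to document.
    @Bean
    public Docket bookApi() {
        return new Docket(DocumentationType.SWAGGER_2)
                .select()
                .apis(RequestHandlerSelectors.basePackage("com.example.bookservice"))
                .paths(PathSelectors.any())
                .build()
                .apiInfo(apiInfo());
    }

    // Metadata about the API owner (name, email, website), as described above.
    private ApiInfo apiInfo() {
        return new ApiInfoBuilder()
                .title("Book Management Service")
                .description("REST API for managing books")
                .contact(new Contact("API Owner", "https://example.com", "owner@example.com"))
                .version("1.0")
                .build();
    }
}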
https://medium.com/swlh/how-to-create-a-node-rest-stub-with-swagger-codegen-b419080559cf
['Somnath Musib']
2019-12-16 22:01:02.316000+00:00
['Technology', 'Software Development', 'Java', 'Software Engineering', 'Programming']
438
Elite 4x01 S4 (Episode 1) On [Netflix’s] 2021
⭐A Target Package is short for Target Package of Information. It is a more specialized case of Intel Package of Information or Intel Package. ✌ THE STORY ✌ Jeremy Camp (K.J. Apa) is an aspiring musician who wants only to honor his God through the power of music. Leaving his Indiana home for the warmer climate of California and a college education, Jeremy soon comes across one Melissa Henning (Britt Robertson), a fellow university student whom he notices in the audience at a local concert. Falling for cupid’s arrow immediately, he introduces himself to her and quickly discovers that she is drawn to him too. However, Melissa holds back from forming a budding relationship, as she fears it’ll create an awkward situation between Jeremy and their mutual friend Jean-Luc (Nathan Parsons), a fellow musician who also has feelings for Melissa. Still, Jeremy is relentless in his quest for her until they eventually end up in a loving dating relationship. However, their youthful courtship with one another comes to a halt when life-threatening news of Melissa having cancer takes center stage. The diagnosis does nothing to deter Jeremy’s love for her, and the couple eventually marries shortly thereafter. However, they soon find themselves walking a fine line between a life together and suffering through her illness, with Jeremy questioning his faith in music, in himself, and in God. ✌ STREAMING MEDIA ✌ Streaming media is multimedia that is constantly received by and presented to an end-user while being delivered by a provider. The verb to stream refers to the process of delivering or obtaining media this way. Streaming refers to the delivery method of the medium, rather than the medium itself. Distinguishing delivery method from the media distributed applies especially to telecommunications networks, as almost all of the delivery systems are either inherently streaming (e.g. radio, television, streaming apps) or inherently non-streaming (e.g. books, video cassettes, audio CDs). There are challenges with streaming content on the web. For instance, users whose Internet connection lacks sufficient bandwidth may experience stops, lags, or slow buffering of the content. And users lacking compatible hardware or software systems may be unable to stream certain content. Streaming is an alternative to file downloading, an activity in which the end-user obtains the entire file for the content before watching or listening to it. Through streaming, an end-user can use their media player to start playing digital video or digital audio content before the complete file has been transmitted. The term “streaming media” can apply to media other than video and audio, such as live closed captioning, ticker tape, and real-time text, which are considered “streaming text”. This brings me around to discussing I Still Believe, a film release of the Christian religious faith-based genre. As almost customary, Hollywood usually generates two (maybe three) films of this variety within its yearly theatrical release lineup, with the releases usually being around spring and/or fall respectively. I didn’t hear much when this movie was initially announced (it probably got buried underneath all of the popular movie news on the newsfeed). My first actual glimpse of the movie was when the film’s trailer premiered, which looked somewhat interesting if you ask me.
Yes, it looked like the movie was going to have the typical “faith-based” vibe, but it was going to be directed by the Erwin Brothers, who directed I Can Only Imagine (a film that I did like). Plus, the trailer for I Still Believe premiered for quite some time, so I continued seeing it most times when I visited my local cinema. You can sort of say that it was a bit “engrained in my brain”. Thus, I was a little bit keen on seeing it. Fortunately, I was able to see it before the COVID-19 outbreak closed the movie theaters down (I saw it during its opening night), but, because of work scheduling, I haven’t had the time to do my review for it…. as yet. And what did I think of it? Well, it was pretty “meh”. While its heart is certainly in the proper place and quite sincere, the film is a little too preachy and unbalanced within its narrative execution and character developments. The religious message is plainly there, but it takes way too many detours and fails to focus on certain aspects that weigh on the feature’s presentation. ✌ TELEVISION SHOW AND HISTORY ✌ A television show (often simply TV show) is any content produced for broadcast via over-the-air, satellite, cable, or internet and typically viewed on a television set, excluding breaking news, advertisements, or trailers that are usually placed between shows. TV shows are most often scheduled well ahead of time and appear on electronic guides or other TV listings. A television show may also be called a television program (British English: programme), especially if it lacks a narrative structure. A television series is usually released in episodes that follow a narrative, and these are usually split into seasons (US and Canada) or series (UK) — yearly or semiannual sets of new episodes. A show with a restricted number of episodes could be called a miniseries, serial, or limited series. A one-time show may be called a “special”. A television film (“made-for-TV movie” or “television movie”) is a film that is initially broadcast on television rather than released in theaters or direct-to-video. Television shows may be viewed as they are broadcast in real time (live), be recorded on home video or a digital video recorder for later viewing, or be viewed on demand via a set-top box or streamed on the internet. The first television shows were experimental, sporadic broadcasts viewable only within an extremely short range from the broadcast tower, starting in the 1930s. Televised events such as the 1936 Summer Olympics in Germany, the 1937 coronation of King George VI in the UK, and David Sarnoff’s famous introduction at the 1939 New York World’s Fair in the US spurred a rise in the medium, but World War II put a halt to development until after the war. The 1947 World Series inspired many Americans to buy their first television set, and in 1948, the popular radio show Texaco Star Theater made the move and became the first weekly televised variety show, earning host Milton Berle the name “Mr Television” and demonstrating that the medium was a stable, modern form of entertainment which could attract advertisers.
The first national live television broadcast in the US took place on September 4, 1951, when President Harry Truman’s speech at the Japanese Peace Treaty Conference in San Francisco was transmitted over AT&T’s transcontinental cable and microwave radio relay system to broadcast stations in local markets. ✌ FINAL THOUGHTS ✌ The power of faith, love, and affinity take center stage in Jeremy Camp’s life story in the movie I Still Believe. Directors Andrew and Jon Erwin (the Erwin Brothers) examine the life and times of Jeremy Camp, pin-pointing his early life along with his relationship with Melissa Henning as they battle hardships and their enduring love for one another through difficult times. While the movie’s intent and thematic message of a person’s faith through trouble is indeed palpable, as are the likeable musical performances, the film certainly struggles to find a cinematic footing in its execution, including a sluggish pace, fragmented pieces, predictable plot beats, too many preachy / cheesy dialogue moments, overutilized religious overtones, and mismanagement of many of its secondary / supporting characters. If you ask me, this movie was somewhere between okay and “meh”. It was definitely a Christian faith-based movie endeavor (from beginning to finish) and definitely had its moments, nonetheless it failed to resonate with me, struggling to find a proper balance in its undertaking. Personally, regardless of the story, it could’ve been better. My recommendation for this movie is an “iffy choice” at best, as some will like it (nothing wrong with that), while others will not and will dismiss it altogether. Whatever your stance on religious faith-based flicks, I Still Believe stands as more of a cautionary tale of sorts, demonstrating how a poignant and heartfelt story of real-life drama can be problematic when translating it to a cinematic endeavor. For me personally, I believe in Jeremy Camp’s story / message, but not so much the feature.
https://medium.com/@elite-4x01-s4-episode-1/elite-4x01-s4-episode-1-on-netflixs-2021-189a784390b3
['Elite', 'Episode', 'On', "Netflix'S"]
2021-06-17 07:27:46.650000+00:00
['Politics', 'Technology', 'Covid 19']
439
Where Are All The Time Travellers?
Where Are All The Time Travellers? With the billions of people who’ll exist in the future and have access to time travel — why haven’t you met one? In the future — Time Traveling will be used by three main industries: Tourism: 4 Dimensional Capitalism. Hawaiian shirt-wearing tourists from the future visit various times and places in the past to check things out for a couple of weeks. It’ll be the same set of people you see in regular travel today: honeymooners, backpackers, and contiki-esque party people Research: Historians, archeologists, and high school students on field trips will venture into the past to learn more about what happened and why — hopefully without stepping on a butterfly along the way Crime Fighting: Law enforcement agencies will stop crime before it happens, as well as hide people in witness protection programs — not in safe houses, but in safe times Over the long term — human populations on Earth will stabilise at around 9 billion people. That’s a lot of potential users of time travel technology…which then begs the question: If there are so many potential time travellers — where the heck are they all? There are five reasons for this: №1: They’re not visiting your time period. Look back on all of human history — what would be the most interesting time and place you can think of visiting? What about the top 10 times and places? Be honest. Do the years of your lifetime feature in that list at all? If they do — is wherever on earth you’re living the most worthy place to check out? Thought not. Chances are you’re just not living in a particularly noteworthy point in history, and thus have never crossed paths with a time traveller. №2: They’re hiding in plain sight Revealing themselves would create all kinds of butterfly effect type problems — irrevocably altering the future. As such, your typical time traveller is likely kept on a very tight leash. Revealing themselves would probably void their “Time-travel passport”, not to mention any fines or criminal prosecution they’d have to face. Further — the time police would no doubt intervene just as the recalcitrant traveller was about to reveal themselves. In light of this, and given they have technology that literally allows them to travel through time and space — they’ve probably got techniques to not draw the attention of the iPhone wielding neanderthals of the early 21st century. It could be as simple as having clothes to blend into our time period, or cloaking technology to totally remain out of our sights. №3: You think they’re crazy Every so often — people come forward who claim to be time travellers. They seem out of place (wouldn’t you be if you were walking around in a different century?). We tend to laugh and dismiss them as mentally unhinged. Well — what more do we want from them? They’re damned if they do, damned if they don’t!
Credit: HBO’s Westworld TV series via reddit.com №4: People experience the past without leaving their own time In the same way we can play open-world games set in different periods of time today (Assassin’s Creed and Red Dead Redemption come to mind) — people in the future are able to explore the past in enhanced virtual reality, or by making visits to various theme parks (as depicted in Westworld). Thus, they are effectively experiencing other time periods without ever leaving theirs. №5: Time travel doesn’t happen It could be that we never invent the necessary technology, usage of time travel is prohibited, or humanity itself has long since been wiped out. Regardless of the reason — time travel does not happen, and thus we never encounter time travellers.
https://medium.com/predict/where-are-all-the-time-travellers-10ccd70f2459
['Kesh Anand']
2019-10-02 12:57:12.634000+00:00
['Science Fiction', 'Science', 'Future', 'Technology', 'Time Travel']
440
How Virtual Care Technology Is Redefining The Healthcare Industry
The recent pandemic has inspired the healthcare industry to introduce devices that are contactless. FREMONT, CA: There’s little question that, thanks to the recent COVID-19 pandemic, the world of healthcare will witness a drastic transformation in the way care is provided to patients. With telehealth becoming the new norm and the rapid growth of virtual care, the healthcare sector has given rise to the newest digital health tools. The latest digital health applications, like self-triaging and contact tracing for COVID-19, have offered new opportunities to start-ups and large firms. Since healthcare is transforming its delivery model, telehealth and virtual care are only expected to grow in the future. The reality of telehealth during COVID-19 With time, the panic among people due to the pandemic is subsiding to an extent. The enterprises that developed virtual care are looking for an appropriate platform that can be used for enterprise-level scaling. See Also: Healthcare Business Review In such a drastic situation, the healthcare sector was allowed to use consumer technologies like Skype. These health systems are now carefully evaluating their platform choices so that they can use them to their benefit. For patients, however, telehealth has become the new reality for getting healthcare regularly. Therefore, several related technology trends are gaining demand, and here are a number of them. The increasing demand for contactless experiences When a natural or human-made calamity strikes, various daily practices can change. In the case of COVID-19, people are afraid to touch any surface that is exposed to the public. Because people are more afraid of visiting a hospital or clinic, healthcare staff taking appointments have begun to screen people, for both in-person and virtual visits, in a way similar to what they would experience at airports. Patients now have the freedom to complete their registration formalities before visiting, thanks to the technology-enabled workflow. The registration booths use face recognition software so that visitors do not need to touch any surface. Routine examinations are also conducted virtually, because diagnostic processes can often be done through remotely controlled tools. Even caregivers visit their patients through virtual platforms, and this trend will only increase in the future.
https://medium.com/@healthcarebusinessreview/how-virtual-care-technology-is-redefining-the-healthcare-industry-cc2bb15a49c8
['Healthcare Business Review']
2020-12-22 07:04:15.740000+00:00
['Facial Recognition', 'Virtual Care', 'Technews', 'Technology', 'Healthcare']
441
How One Pivotal Designer Escaped the Export-upload Abyss
The developer-designer handoff is an imperfect thing. In theory, a designer finishes a UI and sends it to the engineer to code, like a whistling baseball pitch that’s then bashed into the outfield. In reality, the process looks more like a volleyball game. Back and forth the files fly through the Interwebs as both parties cross their fingers that they don’t accidentally work off an outdated document. This was the experience of the design and engineering team at Pivotal, the software and services company based out of San Francisco. Colby Sato, one of Pivotal’s Product Designers on their Platform Monitoring team, would spend days managing the developer documentation on each project. “You’re never done designing — things are always changing because you’re always getting fresh information,” Colby said. Days spent on dev documentation To keep the engineers in the loop about design updates, Colby would export the design frames as images, then upload those images to engineering cards in Pivotal Tracker (Pivotal’s project management tool). That happened 15–20 times per project. Left: A Pivotal Tracker card with static images of designs; Right: A Pivotal card with an always up-to-date Figma link The engineers used hundreds of cards for each project, so Colby would have to spend hours searching for the right cards. Sometimes he would forget to update an image and engineers would work off the wrong file. “It loses time for the developer and it’s embarrassing,” Colby said. The Figma way Then Colby’s manager introduced him to our tool, Figma. We’re a lot like Google Docs for design. Colby loves the fact that his Figma design files are in the browser, which means they’re always up to date, accessible from any computer, and shareable with a link (no software downloads necessary). The tool also has a commenting function for explaining how frames are supposed to function. Pivotal designer Colby Sato works with product manager Melanie Matsuo “It’s not just a design tool — it’s a design management tool in a way,” Colby said. Once Colby used Figma, he no longer had to update the developer cards when designs changed — he just made sure to include the Figma link when the card was first created. “I don’t need to worry about the engineers going to the right file, I know they will,” Colby said. “It easily saves me two days of work per project.” Comments in context Figma also helped shorten Pivotal’s design reviews and make them more effective, according to Colby. At the company, engineers take part in design critiques weekly. They used to write down their notes on post-its and share them aloud one-by-one, a slow, laborious process. “Post-it notes are without context — they’re like floating ideas,” Colby said. “People reading theirs out will drone on. But with Figma we can understand them better because the notes are attached to the design itself.” A Pivotal design in Figma, with pins showing comments While engineers reviewed the design, Colby could watch everyone’s activity in the file, following their cursors around. If developers focused on a part of the design Colby didn’t need help on — like a component that’s set by Pivotal’s style guide — he could prompt them in the right direction. “It gives the person receiving the critique more control over what they want feedback on,” Colby said. Ultimately, Figma eased the designer-to-developer handoff and sped up the design critiques. That made it easier for Colby to work with his counterparts at Pivotal.
It gave them more time to focus on the quality of the work itself, instead of the logistics of tracking updates and feedback. “At Pivotal, we feel like a team that has input from everyone in developing a product is going to build a better product,” Colby said. “Figma enables that.”
https://medium.com/figma-design/how-one-pivotal-designer-escaped-the-export-upload-abyss-da9f2264ca92
['Carmel Deamicis']
2017-11-09 18:28:04.932000+00:00
['Technology', 'Programming', 'Design', 'Case Study', 'UX']
442
The Tech Humanist Manifesto
The Tech Humanist Manifesto We need to encode technology with the best of our humanity. Note: this manifesto generated so much interest and such encouraging feedback that it grew into a book published in 2018: Tech Humanist. Please buy, read, review, share, and keep spreading the word. After twenty-plus years of working in web technology, digital strategy, marketing, and operations, with job titles like “intranet developer,” “content manager,” “head of customer experience,” and even “search monkey,” and after writing a book on the integration of physical and digital experiences and now working on a book on automation and artificial intelligence, I have a difficult time describing to people what I do. So I’ve decided to declare myself a tech humanist. Because what I’ve realized is that data and technology in all their forms are becoming integrated ever more tightly into our lives and ever more powerful, to the point where the work of making technology successful for human use is inseparable from the work of making the world better for humans. I would even argue that the work of making better technology needs to be in lockstep with the work of being a better human. And no, I didn’t grow up wanting to be a tech humanist. I mean, it’s not like I read science fiction as a kid and thought someday I would think, write, and speak about the emerging impact of data and technology on human experience. I was a German major. I still don’t read science fiction now as an adult, by the way, although I do see the connection between the work that I do and that genre’s exploration of technology and culture. Is this what science fiction looks like? image via https://pixabay.com/p-1677542/ It’s just that I’ve always preferred stories that explicitly examine human relationships. Because what interests me most is always people: we’re such complicated systems of nerves and emotions and thoughts and impulses. We’re self-aware animals, pondering our own existence, conscious of our place in the universe. (Not always conscious enough, but still.) Cosmic primates. I do think technology is endlessly fascinating. But I’m even more fascinated by humans and our endless complexity and capacity for nuance. Which means when it comes to any aspect of technology, what I care most about are the people who make the technology, the people who use the technology, the people who benefit from the technology and the people who suffer for the technology, the people whose lives may somehow be changed by the technology. And it’s not because we use technology. In other words, it isn’t just the tools. Ravens use tools. So why am I not, say, a tech ravenist? Unless we find out about other intelligent species with technology in the universe, humans are the best identifiable link between the dominant technology and the rest of organic life on this planet and beyond. So our best hope for aligning the needs of all living things and all technological progress is in our own human enlightenment. We need technological progress. It will surely bring us cures for disease, interplanetary and someday even intergalactic travel, safe and efficient energy, new forms and modes of communication, as well as so much else.
But for our own sake, and for the sake of humans who come after us, we need to wrap that progress around human advancement. And to do that, we need to foster our own enlightenment. We need a more sophisticated relationship with meaning and with what is truly meaningful, at every level: in communication, in pattern recognition, in our relationships, in our existence. To develop technology in harmony with human advancement, we need to challenge our basest instincts and channel our best intentions. We need to genuinely want and be committed to creating the best futures for the most people. Because the fact is we encode our biases into data, into algorithms, into technology as a whole. So as we develop an increasingly machine-driven future, we need to encode machines with the best of who we are. And in that way, infuse the future with our brightest hope, our most egalitarian views, our most evolved understandings. We need to recognize the humanity in the data we mine for profit, to see that much of the time, analytics are people. That everything we do in the digitally and physically connected worlds creates a data trail. That who we project ourselves to be online — that self, that digital self—is our aspirational self, liking things and connecting with other people and wandering through the digital world in awe, and our aspirational self, our digital self deserves due privacy and protection in every way. We talk about “digital transformation” in business. But let’s be honest: most corporate environments are anything but transformative. So we need to begin to re-imagine and yes, transform business operations and culture around new models of infrastructure, new understandings of the social contract between employer and employee, and fundamentally new ideas of value. Tin wind-up robots, via Wikimedia Because our world is increasingly integrated: online and offline, at work and at play, and we have to be wholly integrated selves, too. And so we have to ask what the exchange of value means when it’s about an integrated you in an integrated world. We need to decide, for example, when we talk about autonomous cars: whose autonomy are we talking about? What are the broader implications of gaining freedom while losing control? Evolving from a society of private automobile ownership to privatized fleets of self-driving cars will give us back time, won’t it? Or will it? And yes, it will mean life-changing possibilities for disabled and elderly people. If they can afford it. All in all, as anyone dependent on the New York City subway knows, if our mobility depends on machines we don’t own and don’t directly control, we are making a tradeoff. It may be a worthwhile tradeoff, it may even be an exciting tradeoff, but it is a tradeoff and we should ask meaningful questions about it. We need to know that living in a culture with an ever-accelerating sense of time might mean having to resist an ever-narrowing horizon. That we have to try not to lose our sense of greater perspective in the FOMO frenzy. That our sense that experiences aren’t real unless we share them and receive a few likes (or preferably a lot of likes) could cost us some peace of mind. We need to begin to re-imagine our lives around new dimensions of meaningful experience.
And ask ourselves: What different dynamics come into play when relationships are conducted across physical distances but connected by intimate virtual space, and what can make those relationships more meaningful. What fosters communities when they’re multi-faceted network nodes, and not found mostly in houses of worship and town squares, and what will make those communities more meaningful. What “what we do for a living” will mean as jobs shift, as our understanding of contribution changes, and what will make that contribution more meaningful. Because so much of the way we’ve derived our identity, our sense of accomplishment, achievement, contribution, value, self-worth, is subject to radical overhaul in the next decade and the one following that and beyond. More jobs will be automated, augmented, enhanced, and yes, eliminated. And certainly new jobs will be created, but we can’t wait for them to make sense of this. We have to begin re-imagining now what meaningful contribution looks like in that context. So we need to ask what it means to be human when the characteristics we think of as uniquely ours can be done by machines. What is the value of humanity? And see, it’s not that I’m a human exceptionalist, exactly. I’ve been vegan for 20 years, after all, which I point out to illustrate that I don’t think rights are something only humans deserve. And eventually if I’m around when machines become sentient, I’ll probably care about AI rights and ethics, too. I can’t help it: I’m a sucker for equality. So it’s not that I think humans are so special that we deserve protecting and coddling, except that… just maybe we are, and just maybe we do. I just think that whatever it is — humanity — it’s worth advocating for. Whatever combination of fascination and flaws it was that produced Shakespeare, Gloria Steinem, Paris, pizza, the Brooklyn Bridge, beer, Nelson Mandela, denim, Mary Tyler Moore, coffee, chocolate chip ice cream… I could go on and on, but I don’t even know if any of that is really the best of humanity, or even the best of what humanity has achieved. And what lies ahead of us are even greater challenges. So I don’t know what the best of humanity has been and at some level I don’t really care. I just think we have to be at our best now. And somehow striving for our best, somehow making something lasting, and most of all working to make the best future for the most people — I think that is the best of what humanity can be and has to be. And we need to start making it our mission to give it, to be it, to encode it, to build it in our culture, in our data models, our work environments, our relationships, and all throughout the technology that is interwoven in our lives. It’s not science fiction; the future really does depend on it. ___ Thank you for reading. Please “clap” if you found this piece interesting or meaningful. And please feel free to share widely. Also please note that this manifesto generated so much interest and such encouraging feedback that it grew into a book published in 2018: Tech Humanist. Please buy, read, review, share, and keep spreading the word. Kate O’Neill, founder of KO Insights, is an author and speaker focused on making technology better for business and for humans. Her work explores digital transformation from a human-centric approach, as well as how data and technology are shaping the future of meaningful human experiences.
Her latest books are Tech Humanist: How You Can Make Technology Better for Business and Better for Humans (2018) and Pixels and Place: Connecting Human Experience Across Digital and Physical Spaces (2016).
https://medium.com/intuitionmachine/the-tech-humanist-manifesto-bf9ebaa1e45f
["Kate O'Neill"]
2018-11-25 23:34:41.420000+00:00
['Future', 'Humanity', 'Artificial Intelligence', 'Future Of Work', 'Technology']
443
How Handlebars Helps Power Content Metadata Syndication
Photo by asoggetti on Unsplash Context The Content Delivery Engineering (CDE) team is responsible for the delivery of video metadata (titles, descriptions, images, cast/crew, rights/restrictions, playback URLs, etc.) to external destinations outside of Disney Streaming Services (3rd party catalogs, ad exchanges, partner CMS’s, etc.). When our team was tasked with building a new application in support of that charter, we aimed to build it in a way that would require as little maintenance and overhead as possible. This includes being able to support most new syndication use cases without the need for any code changes. Additionally, we wanted to avoid writing our own templating language, but rather use an existing, proven framework to power our templates. Lastly, each partner had different requirements around the data they wanted delivered, which required lots of custom logic. We wanted to ensure this custom logic for each partner did not exist in our code base but rather was contained in each partner’s particular template. Enter Handlebars! Handlebars to the Rescue! Handlebars.js is a powerful templating engine that allows you to write templates to transform data into arbitrary formats. It is built on the Mustache Templating Language but offers additional functionality. Features such as evaluation of conditional statements and array iteration are extremely helpful when building out templates. One of the limitations is that Handlebars is a JavaScript framework and comes packaged as an npm module. Our application was designed to be a Scala application with multiple components. The requirements laid out for our application were: Deliver content metadata to external systems in various formats required by those systems. A concrete example of this is the need to syndicate video metadata to an ad server to enable relevant ad serving in our ad-supported products. Be able to support JSON, XML and plain text outputs. Currently we have approximately 20 different feeds we deliver to external partners, all with different formats. Push the final output to a service endpoint, S3 bucket or Kinesis stream. In addition to push-based protocols, we needed to be able to stand up HTTP endpoints from which consumers could pull data. Solution After we decided that Handlebars was the framework of choice, we began looking for a way to integrate it into our Scala project. We discovered this Scala library which allowed us to leverage most of the basic helpers which make Handlebars so powerful. It currently implements version 1.0.0 of the original Javascript version. The library also lists what is not supported compared to what is supported by the original Javascript implementation here. Below is an excerpt of one of our templates using a few of the basic handlebars helpers: {{#if data.photos}} <bam:Images> {{#each data.photos}} <bam:Image xlink:href="{{uri}}" height="{{height}}" width="{{width}}" type="image/jpeg" key="{{imageKey}}"/> {{/each}} </bam:Images> {{/if}} The above template is checking if the photos array exists. If it does, it will create a <bam:Images> object. Then, using the #each helper, it will iterate through the photos array and add the values in each Photo object to the <bam:Image> attributes.
If we tried writing code to handle this logic in our application directly, we would end up with lots of bespoke logic based on the partner’s desired output. Using templates with helpers as illustrated above allows us to confine all this logic to a particular template for each partner. Here is the Scala code for the #if helper in the Scala library: def `if`(obj: Object, options: Options): CharSequence = obj match { case it: Iterable[_] => if (it.isEmpty) options.inverse() else options.fn() case _ => IfHelper.INSTANCE(obj, options) } The photos field is an array so it’ll match on the initial case, then it does a simple check if the array is empty or not. If it is empty, it’ll return options.inverse() which actually tells the template to use the value that the template has defined in the “else” block. Else it’ll call options.fn() which tells the template that the condition evaluates to true and to use the value in the “if” block. Finally, the output of the above block will be: <bam:Images> <bam:Image key="" type="image/jpeg" width="133" height="200" xlink:href="https://images.unsplash.com/photo-1530143584546-02191bc84eb5"/> </bam:Images> We Need More! In our effort to remain flexible and stick to the basic tenet that as much custom logic as possible should live in the templates instead of the application code base, we had to begin extending the referenced library with our own helpers. A good example of this is the custom #in helper we created. The logic it applies is: if any of the values specified in the comma-delimited string exist in the field specified, then it should evaluate to true. Here is an example of the #in helper in use: {{#in system "FCC-TVPG (USA), TVPG"}} <advisory systemCode="us-tv"/> {{/in}} In the above, it is simply checking if the field system contains any one of the values specified. If it does, it should continue into the block below and add the advisory element into the output. Here is the Scala code we wrote to support this #in helper: def in(sourceObject: Object, field: Object, values: Object, options: Options): CharSequence = (sourceObject, field, values) match { case (str: String, list: String, _) => if (list.split(", *") contains str) options.fn() else options.inverse() case (obj, list: String, _) => in(Try(obj.toString).getOrElse(""), list, values, options) case _ => options.inverse() } The logic is pretty straightforward, and writing this single helper has helped us immensely as it is used in the majority of our templates. Of course, as we continue to create more templates, we add additional helpers as needed. A slightly more complex helper would be our #dateCompare helper which allows us to check if a date field is before, equal to or after a certain date we specify. Below is an example of it in use: {{#dateCompare startDate "isBefore" "2019-01-31T00:00:00Z"}} <matchTime="{{startDate}}"/> {{/dateCompare}} The field we’re comparing on the source object is the startDate. If the start date is before 2019-01-31T00:00:00Z then we can go ahead and set the matchTime element. As you notice, we have an additional argument in our expression compared to the #in helper. This allows us to specify whether we want to use isBefore, isEqual or isAfter. The key to this is that in our Scala helper function, we just needed to add additional arguments in our function signature. In fact, you can specify any number of arguments you want in any Scala template helper you write.
The code for this particular helper is below: def dateCompare(dateObject: Object, operatorObject: Object, compObject: Object, dateFormatObject: Object, options: Options): CharSequence = { val (dateValueOption, compValueOption): (Option[LocalDateTime], Option[LocalDateTime]) = (dateObject, compObject, dateFormatObject) match { case (dateField: String, compDateField: String, format: String) => (DataExtractor.getDate(dateField, format), DataExtractor.getDate(compDateField, format)) case (dateField: String, compDateField: String, _) => (Try(Some(LocalDateTime.parse(dateField, DateTimeFormatter.ISO_DATE_TIME))).getOrElse(None), Try(Some(LocalDateTime.parse(compDateField, DateTimeFormatter.ISO_DATE_TIME))).getOrElse(None)) case _ => (None, None) } val result: Boolean = (dateValueOption, operatorObject, compValueOption) match { case (Some(dateValue), "isAfter", Some(compValue)) => dateValue.isAfter(compValue) case (Some(dateValue), "isBefore", Some(compValue)) => dateValue.isBefore(compValue) case (Some(dateValue), ("isEqual" | "==" | "equals"), Some(compValue)) => dateValue.isEqual(compValue) case _ => false } if (result) { options.fn() } else { options.inverse() } } The code block above allows us to compare whether a date in the source object isBefore, isAfter or isEqual to a date that we specify. It also allows us to specify a date-time format. If nothing is passed, it defaults to the standard ISO date-time format. Below are just a few more of the custom helpers we have implemented: #in as referenced above, #notIn which is simply the inverse of the #in helper, #startsWith which is used to match on any string starting with a specified prefix, #xmlDuration which is used to convert a duration in milliseconds to xsd:duration format, and several more! From the list of helpers above we can see that there is a wide range of functionality that can be added as a handlebars helper. This allows us to use the same helpers across handlebars templates regardless of the output format we’re building. Putting it all Together Now that we have created all the helpers we need and created templates that match the requirements outlined by our partners, how do we put it all together to actually produce the expected output? The Scala library we’re using is written on top of this Java implementation. To get started, we simply create a new instance of the Handlebars object. val handlebars = new Handlebars() Then we can go ahead and start registering our helpers. The Scala library offers a few Scala Helpers, and of course we want to add the ones we created. We can do that by chaining the registerHelpers function to the above: val handlebars = new Handlebars() .registerHelpers(ScalaHelpers) .registerHelpers(inject[CustomHelpers]) Now that we have our handlebars instance, we can begin building outputs. The first thing we do is pass in the handlebars template we created: val template = <some handlebars template> val handlebarsTemplate = handlebars.compileInline(template) where template equals: {{#if data.photos}} <bam:Images> {{#each data.photos}} <bam:Image xlink:href="{{uri}}" height="{{height}}" width="{{width}}" type="image/jpeg" key="{{imageKey}}"/> {{/each}} </bam:Images> {{/if}} Now that we have our actual template, we can pass in the source data, which in our case would be JSON, and voila, we’ll have our output.
def ctx(obj: Object) = Context.newBuilder(obj).resolver(Json4sResolver, MapValueResolver.INSTANCE).build val output = handlebarsTemplate(ctx(contentJson)) where contentJson equals: { "photos":[ { "uri":"https://someurl1.com/", "height":608, "width":1920 }, { "uri":"https://someurl2.com/", "height":1024, "width":780 } ] } The ctx function defined above creates the context stack for the template; this makes all elements in the source object available to the template. We then use this function and pass its result into our Template object. This merges the compiled template with the source context and voila, we have a fully built output! The following is the final output that will return from our call to merge the compiled template with the source context. <bam:Images> <bam:Image xlink:href="https://someurl1.com/" height="608" width="1920" type="image/jpeg" key=""/> <bam:Image xlink:href="https://someurl2.com/" height="1024" width="780" type="image/jpeg" key=""/> </bam:Images> Where are we now? As our application has continued to mature, we are now at a place where adding a new integration is as simple as creating a new Handlebars template and mapping the source data to the desired output. This has allowed us to introduce new use cases at a rapid pace without the need to write custom syndication applications for each. Our application plays a crucial role in the Disney Streaming Platform, and the use and customization of Handlebars has made it relatively easy to manage our growing consumer base.
https://medium.com/disney-streaming/how-handlebars-helps-power-content-metadata-syndication-86957c06b365
['Steve Yacoub']
2019-02-19 09:53:01.345000+00:00
['Technology', 'JavaScript', 'Scala', 'Handlebars']
444
FBI Document Shows How Popular Secure Messaging Apps Stack Up
An FBI document lays out the information various secure messaging apps can share with law enforcement. By Nathaniel Mott It can be hard to decide which secure messaging app to use. Luckily, a newly leaked document that was reportedly prepared by the FBI’s Science and Technology Branch and Operational Technology Division makes it easy to see what kinds of information various services can provide in response to requests for user data. Rolling Stone reports that the leaked document was prepared on Jan. 7. It’s titled “Lawful Access,” and according to its header, it describes the “FBI’s Ability to Legally Access Secure Messaging App Content and Metadata.” The document is unclassified, but it’s alternately designated as “For Official Use Only” and “Law Enforcement Sensitive.” “As of November 2020, the FBI’s ability to legally access secure content on leading messaging applications is depicted below, including details on accessible information based on the applicable legal process,” it says. “Return data provided by the companies listed below, with the exception of WhatsApp, are actually logs of latent data that are provided to law enforcement in a non-real-time manner and may impact investigations due to delivery delays.”
https://medium.com/pcmag-access/fbi-document-shows-how-popular-secure-messaging-apps-stack-up-9102549bf91c
[]
2021-11-30 14:09:55.863000+00:00
['Technology', 'FBI', 'Apps', 'Messaging']
445
The OBEY sign from They Live
The OBEY sign from They Live The COVID-5G tower conspiracy theory and the typography in John Carpenter’s movie It is a testament to the seriousness of the COVID crisis that both Facebook and Youtube have banned conspiracy theorist David Icke, despite his immense popularity. For platforms like Facebook and Youtube, engagement (the depth and time an average user spends viewing their content) means more advertising $$$. Mark Zuckerberg, in particular, has stated that Facebook should not fact-check politicians’ claims. While this is seemingly in the service of free speech, there is a slight conflict of interest here. Fake, outlandish clickbait and outright lying are a far better source of revenue than boring facts. Our brains are hard-wired to engage more with nonsense than with real information — the same way junk food tastes better than a salad. Google’s Youtube is equally guilty of preferring popular content over social purpose. The platforms, as a consequence, are awash with all kinds of false information, xenophobia and hate speech. The only God here is money. David Icke had almost a million Youtube subscribers, and his videos discussing various conspiracy theories, it is estimated, cross over 30 million views across social media. His last video, about getting kicked off Facebook, had received 120,000 hits before Google pulled the plug on him, too. Icke’s theory fell afoul of new rules which specifically disallow content that claims the virus does not exist or offers false, medically unsubstantiated advice about the virus. Icke is responsible, along with other odd-balls and cranks, for spreading the idea that coronavirus symptoms were somehow caused by radiation from 5G telecommunication towers. In the UK, Icke’s home country, there have been over 77 arson attacks on phone tower masts and counting. This theory is now spreading in the United States and other parts of the world. Icke, probably the world’s premier conspiracy theorist, is no stranger to controversy. His book, The Truth Shall Set You Free, was even recommended by Pulitzer-winning author Alice Walker in The New York Times. The problem, however, is that Icke cites The Protocols of the Elders of Zion repeatedly through his work and with approval. Unfortunately, no matter how amusing or even insightful Icke’s conspiracy-mongering can be, The Protocols of the Elders of Zion is the primary text of anti-semitism. Much more important is that it is a completely fraudulent, forged document. Its principal purpose is the targeted defamation of Jews in order to prejudice public opinion against them. From Russia to Germany, this “book” has helped fan the flames that have led to murders, pogroms and genocide. In fact, The Protocols is the prescribed textbook of anti-semitic feeling. It has inspired everyone, from Adolf Hitler to the grass-root skin-head on his way to his local synagogue with a petrol bomb in hand. Icke’s ban, along with his work, is generally discussed in the context of free speech and its limits. However, the matter, as we will see in this essay, is much more complex than a simple disagreement over what kind of material should be allowed to circulate in the public domain. The vulgarisation of high art The Protocols of the Elders of Zion is plagiarised from an earlier work by a French writer named Maurice Joly. The best book on this subject is a graphic novel by Will Eisner. © Will Eisner Maurice Joly was a 19th century lawyer and polemicist who was particularly angry with the government of Napoleon III.
Despite a ban on publications critical of the monarch, Joly liked to write exactly those kinds of pamphlets. While a monarchist and a conservative, Joly felt Napoleon III was a tyrant who was not respecting the limits placed on his powers by the French constitution of that time. Most of his books, put out by subversive editors, were destroyed. In a review of Umberto Eco's The Prague Cemetery, Rebecca Newberger Goldstein writes: The story of the "Protocols" is rendered even stranger by the labyrinthine history of plagiarisms and hoaxes that went into its making, and it is this astounding back story that Eco fictionalizes. One of the plagiarized sources is an 1864 French political pamphlet, satirizing Napoleon III, entitled "Dialogue in Hell Between Machiavelli and Montesquieu." The author, Maurice Joly, who spent 15 months in jail for his efforts, attacks the legitimacy of the emperor by showing plotters in hell undermining a rightful regime. Roughly two-fifths of the "Protocols" so closely parrots Joly's wording that there is little doubt of the borrowing. Joly, in turn, had plagiarized a popular novel by Eugène Sue, "The Mysteries of a People," which presented the schemers as Jesuits. These sources are predated by a late-18th-­century best seller, "Memoirs Illustrating the History of Jacobinism," by the French cleric Augustin Barruel, who charged that behind the French Revolution lurked a conspiracy of Freemasons. In other words, conspiracy theory is not a new phenomenon. The idea of who is conspiring has changed over time: Jesuits, Freemasons, Jews or lizard-people aliens, depending on the political goals of the writer. Joly's narrative of a conspiracy targeting Napoleon III's administration was gutted and filleted by Russian propagandists, and then deployed against the Jewish community. The point here is the vulgarisation of intellectual criticism. Joly was trying to hold the government of his day to account, and promote a just society. Instead, his work helped motivate pogroms and genocide. The medium is cable television: Similarly, Icke's idea of an alien-lizard race invading and living amongst humans as their rulers is hardly original. To my mind, the plot seems a straight lift from the John Carpenter film, They Live. In fact, Icke has spoken in glowing terms about the film. Obviously, when I first saw They Live as a teenager, all I saw was a fun action movie. Proceedings involved a large Caucasian male. He had a blonde mullet, wore a pair of prominent shades and a lumberjack shirt. He proceeded to blow out the brains of the alien-monsters with a pump-action shotgun. Moreover, I had found my beloved Duke Nukem 3D's source for the line: "here to kick ass and chew bubble gum, and I'm all out of bubble gum." However, almost a decade later, I discovered, to my considerable delight, that Slavoj Zizek uses They Live as a principal text to explore the concept of ideology and how it shapes politics and culture in his documentary, The Pervert's Guide to Ideology. Ideology is an important concept in this documentary because the subject is, literally, the concept of ideology (look carefully, it's in the title).
Zizek summarises the plot of They Live and explains his theory of ideology (directed by Sophie Fiennes). Those interested in political theory that describes how elites use ideology to justify social inequality can consult Thomas Piketty's latest book. The signage typefaces in They Live: A background. As Toshi Omagari puts it: "They Live is among the best films that use typography for storytelling." Many films, particularly of the socialist realism genre, attempt to educate us about the silly but effective tricks elites use to con the masses. These movies can be boring as hell. Where Carpenter succeeds, and the empanelled writers of Pravda failed, is in the inventive trick he uses to talk about what Karl Marx would call "class consciousness". When the magic sunglasses are worn, the signal broadcast by the aliens ceases to have any effect. This reveals not just the aliens, who in this world are ordinarily camouflaged by the broadcast and walk freely amongst humans without being noticed, but also the propaganda they use to keep humans blinded and compliant. With the sunglasses on, money is seen to be simply a piece of paper that says: "THIS IS YOUR GOD". Similarly, signage and billboards issue other commands. A set of directions states: "NO INDEPENDENT THOUGHT". A sign that you would normally expect to say "job vacancy" says "CONSUME". The most iconic of these commands is the "OBEY" advert in a magazine. John Carpenter's signature typeface for credit titles in his movies is Albertus. Rumsey Taylor's linked article refers to an amazing coincidence. There is a road in London with a sign that says "John Carpenter Street". It is in Albertus because that is the official typeface of the City of London Corporation. As the article puts it: "John Carpenter was a 14th century figure and has no connection with the director, and the films precede the Albertus branding of the Corporation." A curious story emerges from the message boards and comments on articles discussing the typefaces in which the commands are printed. STAY ASLEEP and THIS IS YOUR GOD are set in Tempo. CONSUME is set in Twentieth Century, as is NO INDEPENDENT THOUGHT (but in an ultrabold weight). But what of OBEY? OBEY is now the most famous of all of They Live's alien command advertising and signage. The sign is now ubiquitous on the clothing of skateboarding teenagers in city squares around the planet. They Live: a brilliant piece of print art from Roughtrade Books. OBEY is seen widely principally because the sign has been expropriated into a clothing brand by street artist Shepard Fairey. OBEY clothing claims to be "manufacturing dissent" (a wink, scholars trained in Critical Theory 101 will recognise, to Noam Chomsky). Fairey is also responsible for disseminating the "Andre the Giant Has a Posse" sticker and logo into the streetscape. Artists like Fairey are "subvertisers" — artists interested in hijacking the messaging idioms and platforms of mainstream advertising, and conducting media experiments, to transmit political ideas. One of the most creative examples is "Led by Donkeys", a group of British subvertisers who regularly punk their government. Toshi Omagari's article remains inconclusive on the exact typeface used to make the OBEY sign. Commenters have suggested that Classroom JNL by Jeff Levine is a good fit. However, Levine only released the typeface in 2009. They Live was made in 1988.
A commenter, Florian Hardwig, offers the most plausible solution, quoting Levine: "A set of old die-cut cardboard letters and numbers used by teachers directly on bulletin boards or for tracing was the inspiration for Classroom JNL. In turn, these letters take their cue from typefaces such as Franklin and earlier wood type designs." So, this clears up the mystery. The prop makers of They Live, film school graduates, must have used a fairly standard and commonly available set of stencils to make the OBEY sign — the same ones Jeff Levine used to make his typeface. It is easy to forget, in the age of computers, that typefaces can be hand-drawn or made from stencils. Propaganda and technology: To offer an attractive service to advertisers, Google and Facebook build up a psychological profile of their users. In order to do so, they collect vast amounts of data about each individual. If the damage were limited to the individual (the theft of their attention and focus), this would not be a wide social problem. However, this psychometric profile has become of utmost interest to a very specific kind of major advertiser: political strategists. Elections tend to be decided on thin margins. Political parties are mostly firmly established in certain heartlands. What decides an election are "bellwether" places: places where a floating population of voters can shift the overall result. Political strategists are always on the lookout for issues that can motivate people to vote (or, in the American South, prevent people from voting). This is why they advertise heavily. But, as Brexit and Donald Trump's successful run for the White House show us, a very new kind of vote-bank has been created: the conspiracy theorist. The psychometric data that Facebook has collected allows special interest groups to identify, for instance, people who believe (because of the tabloids) that Europe wants to prevent the English from having bananas with curves. They have weaponised this group into a vote-bank. Social media and big data's neat tricks have allowed dangerous ideas, from the movement against vaccinating and inoculating children to climate change denial, to influence policy. We are seeing a serious deterioration of political rights and basic freedoms. In America, women are on the cusp of losing a once constitutionally guaranteed right to privacy and maternal healthcare (including, but not restricted to, family planning and abortion), while the British have lost their freedom to work and settle in Europe. Conspiracy theory was once a harmless, somewhat niche activity. An almost genteel pastime. This group, the tin-foil hat wearers, were a kind of helpless people who belonged to a Philip K. Dick milieu, and spent their time fixated on things like cryptozoology and crop circles made by aliens. Before a professor at the notoriously puritan St. Xavier's Mumbai filed a successful petition asking for a legal ban, there was actually nudity to be found on late-night TV in India. One of the programs I would stay awake for was on TV6 Mockba — a Moscow channel that would broadcast Playboy's Red Shoe Diaries starring David Duchovny (dubbed over in thick Russian). I had no idea when the broadcast would begin or if it would happen at all. So I often ended up switching channels to Star Movies (beamed from Singapore). Sandwiched between Shaw Brothers kung fu movies, They Live seemed to be on a late-night loop.
That most ultra of ultra conservative-capitalist businesses, the Rupert Murdoch media conglomerate, was broadcasting a film about how broadcasting and the media use technologies of mass hypnosis so that the conservative-capitalist-media axis can dominate and rule. Some humble, low-level Murdoch cassette jockey was using his corporate gig to broadcast John Carpenter's message to the freethinkers and prospective delinquents of Asia. Our man, a mole embedded deep within the structures of the enemy's bureaucracy, was getting a message out to us. Icke will not be the last false advertiser who guts and fillets powerful political criticism — whether Maurice Joly's or John Carpenter's — for his own political message. That is why Slavoj Zizek argues that ideology can never be beaten; it can only be changed.
https://medium.com/fan-fare/the-obey-sign-from-they-live-bea1aa107a27
['Neel Dozome']
2020-10-02 16:20:54.401000+00:00
['Design', 'Technology', 'Typography', 'Visual Design', 'Film']
446
Staking 2.0: Taking SPoS One Step Forward to Enjoy Hassle-free Staking in the VSYS Ecosystem
18 November — V SYSTEMS' Supernode Proof of Stake (SPoS) was designed with users in mind to enable a simple and fast staking process. As we continue to lay a foundation for blockchain mass adoption, we are introducing Staking 2.0, which allows all V SYSTEMS users to join the first wave of applications powered by V SYSTEMS. VSYS holders will be able to get not only VSYS as rewards, but also other tokens built on the V SYSTEMS mainnet. The SPoS consensus mechanism is the backbone of the V SYSTEMS blockchain. As we keep advancing the scalability of the network, we continue to explore ways to support the growth of the ecosystem and encourage active participation in the blockchain network through staking. Staking 2.0 opens up possibilities for VSYS coin holders to participate in the development of the applications built on V SYSTEMS, realising the principle of "Stake as Power" in the latest revolutionary PoS. First Wave of V SYSTEMS-Powered Applications: Our first project, Tachyon Protocol, will leverage X-VPN's 50 million existing global users to become one of the leading decentralized internet protocols. As its IPX token will be available soon, we welcome the VSYS community to join in fruitful collaboration with the project through our Staking 2.0 initiative. In addition to the Tachyon Protocol, the V SYSTEMS team is also working with xCurrency and a Fortune 500 real estate group on DeFi application development. We are excited to bring onboard more community-facing and enterprise-level projects to cultivate a robust DeFi ecosystem. A Simple Guide to Staking 2.0: 1. A portion of V SYSTEMS-powered tokens will be allocated to reward VSYS users who participate in staking. 2. The system will distribute the pool of tokens to supernodes daily, for a specific time period (depending on the amount of tokens). 3. Supernodes will allocate the tokens to their community members according to the default rate/rule they set for VSYS rewards.
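To make the three-step guide above concrete, here is a minimal, purely illustrative Python sketch of the distribution logic. Every name, stake, and pool size below is a hypothetical assumption for the example; this is not V SYSTEMS' actual implementation, which may weight supernodes and community members differently.

# Illustrative model of the three-step Staking 2.0 distribution.
# All names and numbers are hypothetical, not V SYSTEMS code.
DAILY_POOL = 10_000  # step 1: tokens allocated to stakers for one day

# stake held by each member, grouped by the supernode they stake with
supernodes = {
    "node_a": {"alice": 600, "bob": 400},
    "node_b": {"carol": 1_000},
}

total_stake = sum(sum(members.values()) for members in supernodes.values())

for node, members in supernodes.items():
    node_stake = sum(members.values())
    # step 2: the system distributes the daily pool to supernodes
    node_share = DAILY_POOL * node_stake / total_stake
    # step 3: each supernode passes tokens on to its community members
    # according to the rate/rule it sets (here: a simple pro-rata split)
    for member, stake in members.items():
        reward = node_share * stake / node_stake
        print(f"{member} receives {reward:.1f} tokens via {node}")

Run once per day over the reward period, a pro-rata rule like this keeps each holder's payout proportional to their stake, which is the intuition behind "Stake as Power".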
https://medium.com/vsystems/staking-2-0-taking-spos-one-step-forward-to-enjoy-hassel-free-staking-in-the-vsys-ecosystem-6fc9c6e38686
['V Systems']
2019-12-17 03:21:22.195000+00:00
['Distributed Ledgers', 'Database', 'Cryptocurrency', 'Blockchain', 'Technology News']
447
Quantum Computing And The Meaning Of Life—Not Just ‘42’
But what exactly is quantum computing? To understand why it's so incredible, one must look at the difference between a quantum computer and a regular computer. A regular computer works by switching millions of tiny transistors between 1 and 0, or "on" and "off". The computer can only tell each transistor either to let an electric current pass or not. There's no other way and no in-between. So a computer has to switch through the different combinations one by one. First it's, for example, 1000101, then 0101101, and then 1100100. These three random numbers already represent three different setups and have to occur in order. The computer cannot represent all three of them simultaneously. And though coming up with these three will only take the computer a few nanoseconds, having to go through billions of combinations, with a lot more numbers (transistors) involved, can quickly become a time-consuming effort. A quantum computer makes use of a physical phenomenon that takes place in the still quite mysterious quantum world. A so-called "qubit", which replaces the traditional transistor and consists of a molecule that's deliberately spun at incredible speeds by shooting it with lasers at pinpoint accuracy while keeping it suspended in a near-absolute-zero environment, will fall into a so-called superposition. Remember the transistor? It's either 1 or 0. The qubit, however, can be either 0, or 1, or anything in between (meaning a little of both at the same time). It uses a quantum state, which basically means it's everything and nothing at the same time. To describe it really simply: instead of having to go through the three binary number examples one after the other, a quantum computer can calculate and display all three at the same time. Imagine the game where you put a little ping pong ball under one of three plastic cups and start switching the cups around. If you were to work like a regular computer, you'd lift them up one by one to find the ball. A quantum computer simply lifts up all three at the same time, finds the ball, and then acts as if it never lifted the two empty cups in the first place.
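For readers who like to see the idea in code, here is a minimal Python (NumPy) sketch of the "0, or 1, or anything in between" point. It only illustrates the arithmetic of a single qubit's state vector; it is not a simulation of real quantum hardware.

import numpy as np

# A classical bit is exactly one of two states (shown only for contrast).
bit_zero = np.array([1, 0])  # "0"
bit_one = np.array([0, 1])   # "1"

# A qubit is a normalized vector alpha*|0> + beta*|1>,
# with |alpha|^2 + |beta|^2 == 1: "a little of both at the same time".
alpha = beta = 1 / np.sqrt(2)
qubit = np.array([alpha, beta])

# Measurement collapses the superposition: 0 appears with probability
# |alpha|^2 and 1 with probability |beta|^2.
print(np.abs(qubit) ** 2)  # [0.5 0.5] -- an equal superposition

The key property is the pair of amplitudes: a classical bit commits to one basis state, while the qubit carries weight on both until it is measured.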
https://medium.com/illumination/quantum-computing-and-the-meaning-of-life-not-just-42-b1d638c6cdd0
['Kevin Buddaeus']
2020-09-06 02:46:13.153000+00:00
['Technology', 'Data Science', 'Future', 'Science', 'Life']
448
What I Learned From Watching 15 Video Streams in One Month as a Marketer
Digital Marketing Guide for Business. What I Learned From Watching 15 Video Streams in One Month as a Marketer: What to tick on your checklist if you plan to host a webinar. The global lockdown stopped the world in the most literal sense, cancelling thousands of events (business events, sports leagues, concerts, awards) and even postponing many prominent events, including the Olympic Games, to 2021. Photo by Sincerely Media on Unsplash. But even in the face of natural adversity, marketing professionals and content creators found a new way to continue their plans and push through these unfortunate circumstances. With real events, conferences, shows, workshops, and gatherings vanishing overnight, many of them moved into the digital realm. Webinars, live streams, digital summits — April 2020 was filled with conference streaming, across countless industries. Video streaming for business is not a new concept. Webinars and other forms of live video shows have been around for years now, and are mostly used by companies, brands, and content creators to engage with their clients, customers, or following. Top reasons include: generating sales leads; educating an audience on a particular topic; presenting a new product; and onboarding or training clients and employees. The sudden (re)gained popularity of conference streams: With many desk workers doing their jobs from home for many weeks now, a lot of companies and brands saw an unmissable opportunity to build their digital presence and capitalize on people's new-found free time. And what better way to keep people connected than through live video content? And everything is available for free, in the safety of one's house, with the click of a button. "If I don't host a webinar to keep my consumers/clients/audience entertained, then someone else will." — Marketer thinking in the shower, late March 2020. During April 2020, I watched 15 webinars, for various reasons — I wanted to make the most out of my lockdown time and learn new things (I admit this sounds like a cliche), to feel like I am a part of some kind of community again, because my boss made me, and ultimately, to pass time. And yes, watching webinars can be a fun activity, especially if you're chatting with colleagues and friends about what's happening on the screen. Here are the most important things to know if you plan to organise a webinar, digital summit, or live stream: 1) Getting people to engage can be hard, especially if they're already bored. People dozing off through conferences is not something new. And most business events on this planet are guilty of committing the sin of bringing onboard LinkedIn-renowned keynote speakers. They spend 45 minutes stating truisms with enthusiasm without delivering ONE single new point of view on the debated matter. The beauty of the online medium? Rules no longer exist. No one can be forced to watch a 45-minute session. There is no peer pressure, no one around to notice what you are doing, or to give you a disapproving look when you choose to pull out your phone and scroll through feeds instead of listening to what is in front of you. It's just you, in the comfort of your home, wearing pyjamas, and looking for something that is not boring. And if it is, you will close the window within 3 seconds. What to do: Prepare a comprehensive and exciting agenda — make sure attendees fully understand the topics you plan to discuss. 15 genuinely interested people are more valuable than 100 who will close the window within the first 5 minutes.
Encourage attendees to send questions before the session, and make sure you cover the topics during the live webinar. Mention where you got the question from. It shows that you care and pay attention to what people want to talk about. Make sure you have a live chat/email address/social media account where you can receive questions as you go. Answer all of them, even if they seem basic. The chances are people joined to learn something, and you should help them achieve that goal. 2) Know your content, pick your battles. Live streaming is not for everyone. Not every company, brand or creator should include this in their marketing plan, even if they have the means to do so. A bad webinar can negatively affect your reputation, make people rethink their opinions about you, and overlook your future efforts. This is not about the company size, prestige, or how much money one can afford to spend; rather, it revolves around two aspects: finding the right content and delivering it by asking the right people. Start with content, and think about the following questions: Was my topic already covered extensively in the media? What are the key messages which tend to repeat? Do I have a different angle on this? Can I talk about it for 45 minutes without repeating information? Have my customers ever expressed interest in the matter? You get the drill. Ask as many questions as possible, and be honest with yourself. Webinars can be time-consuming projects, with a lot of teeny tiny resource-eating tasks. Be smart about it and think things through before deciding. If you cannot find any value in your idea, consult with other people, do a survey, see if the interest is real. You'll get a clear answer within 24 hours. What to do: Choose a broad subject, and then narrow it down. Find an interesting angle and start the conversation from there. Go as in-depth as possible and avoid talking about flaky and ephemeral trends (unless that's what you want to present). Showing fresh stats or analyzing data can be exciting and educational. Select your speakers carefully. Make sure their expertise overlaps with your chosen topic (almost) perfectly. Watch videos or previous keynotes they delivered to ensure they are coherent, fluent, and have an unusual take on your desired topic. Don't fall for people with Keynote Speaker in their LinkedIn description — Mark Ritson explains why for Marketing Week. Keep the promotional part to a minimum. Feel free to add a call to action at the very end, but otherwise stick to the key messages and make an effort to present them in an easy-to-follow manner, using supporting evidence. If they pressed the REGISTER button, people already know who you are and what you do; no need to overdo it. 3) Foster human connection, help people talk. Knowing how to network and meet people is one skill which can be used no matter what your job title is. And the digital world makes it easier to do, without having to stand in long impromptu meetings with strangers you don't know how to approach. Attendees can be encouraged to meet other like-minded humans before, during, or after webinars, and as an organiser, there are ways to facilitate social interaction. What to do: Allow people to engage: create a social media/email group for attendees, and make it easier for people to interact. If your streaming platform has a built-in chat functionality, turn it on. Let people know who else is attending, and offer to make them connect via their preferred channel.
This applies mostly to companies in a niche industry who are looking to expand their industry relations network. Make a public list where people can leave their details to be contacted by others, or ask the attendees' permission to share their info with others. If you host a series of webinars, consider using a platform which allows attendees to build their own profile featuring their LinkedIn photo, email, current location, and social handles. 4) Doing a dry run isn't optional. Your planning is going great. You have an impressive angle for the presentation, some great people with extensive expertise on the matter as your guests, and the attendance list is almost full only one day after opening registration. You are ready to celebrate your success. But first, let's make sure your efforts are not in vain. You need to make sure that people can hear what you are saying, that they can see what you are presenting, and that your branding is not taking up half of your screen. Two weeks ago, I attended a content marketing conference which was streamed live on Facebook. The moderator was screaming into the microphone from his bedroom, the first speaker wasn't sure how to enter presentation mode, and the second one was nowhere to be found. Five minutes were spent listening to them trying to get ahold of each other, while other attendees were expressing their thoughts in the live chat. It wasn't pretty. I left that chaos after my brain got tired of trying to understand what was happening. Presenter mode shown to the audience on a live Facebook stream. What to do: Make sure you know how everything works. You need to know every technical detail by heart: every button, every arrow, every hidden tab. Avoid awkward live situations. You want people to stick around, and I guarantee they will not if you spend five minutes figuring out how to unmute yourself. As the organiser, don't put anyone on live mode without doing a test beforehand. Preferably some days ahead of time, so you can identify possible issues that can arise. Solve any technical problems, train your presenter, make sure everything is crystal clear. Steer clear of blockers. Make the process as effortless as possible. As a rule of thumb, never ask attendees to download any software or create accounts on paid platforms. No one will make the effort, since there are so many great options to stream content online, using a regular browser, without paying anything. 5) Make an effort, even if you're speaking from your kitchen. If lockdown taught us something, it's humanity. With so many people having Zoom meetings from their home, it's inevitable to see some private aspects of one's life. Seeing someone's pets in the background, hearing kids screaming, or the washing machine running is the new normal. And while everyone can understand the struggle, there are simple ways in which you can make your environment feel uncluttered. What to do: Even if you are talking from your bedroom, living room, or home office, make sure the background is clean, minimal, and preferably a light solid colour. Use a white wall or a projector screen as a backdrop. Remove personal objects scattered around you. If you can't avoid showing a part of the room you're in, then make sure everything is tidy, clean, and looks put together. 6) Watch out for nonverbal cues, don't scratch your head. With the laptop's camera less than 30 centimetres away from someone's face, it's effortless to watch their reactions and interpret them accordingly.
This process is performed by the human brain automatically. Most people can distinguish between human emotions by reading facial expressions, as the American Psychological Association points out. But it's not only your face that can tell people what you really think about a subject. Your hands play a big role in how people perceive you. The best strategy? Cool your nerves, keep your hands together, and try to stay relaxed. What to do: Keep your back straight and watch your posture throughout the call. Sit at your desk on a chair, so you don't feel too comfortable and your brain knows you are working. Look into the camera when speaking and make sure your laptop is slightly lifted so people can see your face. If the device is placed too low, they will see inside your nose. You don't want that. A quick fix is to put your computer in a stable, high position, like on a pile of books or a sturdy box. Don't put your arms on your head, touch your nose, arrange your hair, or play with objects. These little gestures can be very distracting, and the last thing you want is for people to watch what you are doing instead of listening to what you have to say. 7) Never spam people. Sending people countless reminders, emails, and notifications is not the way to attract an audience for your stream. The only thing you can accomplish if you bombard potential attendees with messages is pretty simple and straightforward: people will think you are pushy and desperate, and they will not join your events ever again. A colleague of mine received as many as 15 emails with the same information, in the same layout, with the same call-to-action, for one single event. She passed on that opportunity. And I am sure most people would do the same. What to do:
https://medium.com/swlh/what-i-learned-from-watching-15-video-streams-in-one-month-as-a-marketer-9548542bd339
['Bianca Mathe']
2020-05-26 08:28:17.748000+00:00
['Technology', 'Social Media', 'Marketing', 'Digital Marketing', 'Business']
449
Becoming Root Through An SUID Executable
Becoming Root Through An SUID Executable: Linux Privilege Escalation By Exploiting The SUID Bit. Photo by Kevin Horvat on Unsplash. Welcome back to the Linux Security Series! In this series, we'll discuss security issues that affect Linux systems and common misconfigurations that lead to them. Let's get started! Privilege escalation is how attackers expand their access on a system beyond the privileges they started with. For example, let's say that an attacker has gained access to your web server, but only as a low-privileged user. They cannot read or write sensitive files, execute scripts, or change system configuration. How could they compromise your server and maintain their access there? If attackers can find a way to trick the system into thinking that they are the root user, they can carry out more powerful attacks like reading and writing sensitive files and inserting permanent backdoors into the system. And this is where privilege escalation comes in. Today, let's talk about how attackers can exploit SUID programs to escalate their privileges to become root. The SUID Bit: SUID stands for "SetUID". It is a Linux permissions flag that allows users to run a particular executable as that executable's owner. For example, if a file is owned by root, the program will always run as root, regardless of who started the execution. Why would this functionality be useful? A common use-case for SetUID is the password change utility. To change your own password, you have to modify sensitive system files, such as /etc/shadow. This file is normally only accessible by root, so you need root privileges to carry out a password change. But since the system doesn't want to give a normal user root privileges, SUID allows users to obtain root privileges only when running certain programs. In this case, SUID on the password change utility allows users to gain temporary root access to change their passwords without obtaining root access across the board. For the most part, this is normal and necessary behavior. But if an attacker can find a way to execute arbitrary code when running these SUID programs, they can exploit the temporary root access to execute code as the root user on the system! For example, let's look at the "Vim" file editor first. Let's say that Vim is owned by the root user and has the SUID bit set on a system. This means that whenever a user runs the Vim editor, Vim is running with root privileges. No biggie, right? This setting could actually spell a death sentence for your system, because you can run arbitrary system commands from within the Vim editor! To run commands in Vim, you type the characters colon bang ":!", then the command you want to run. For example, to run the "ls" command, you can type ":!ls" then press enter. You should now see the results of the ls command on your terminal. So this means that if the Vim executable has the SUID bit set, the attacker can execute system commands as root from the Vim editor! Other programs, such as the pagers "more" and "less", also allow for command execution from within the program. For these programs, you type the bang character "!" followed by the command that you want to execute. For example, to run the "ls" command, you can type "!ls" then press enter. You should now see the results of the ls command on your terminal. Besides file editors, another program that allows users to run arbitrary system commands is the "find" command.
The find command is usually used for locating files and often has the SUID bit set to allow users to find files across the system. But find allows the execution of system commands through the "-exec" flag! For example, to run the "ls" command from within the find command, you can use the command "find . -exec ls \;". So if the find executable has the SUID bit set, the attacker can execute system commands as root! Escalating Privileges Using The Vulnerability: These misconfigurations make privilege escalation trivial. For example, an attacker can use the ability to execute commands as root to add themselves as a root user in the /etc/passwd file. This command will do just that: echo "vickie::0:0:System Administrator:/root/root:/bin/bash" >> /etc/passwd. This command adds a root user with the username "vickie" and an empty password. Since "0" is the UID of the root user, adding a user with a UID of "0" gives that user root privileges. This command is not possible for regular users because only privileged users can modify system-critical files such as the /etc/passwd file. More SUID Dangers: Programs that lead to privilege escalation when run with SUID are not just limited to programs that allow for arbitrary system code execution. Any program that allows arbitrary writes to system files, is owned by root, and has the SUID bit set can lead to privilege escalation. For example, if the file editor "Nano" has the SUID bit set, the attacker can use Nano's root permissions to open the "/etc/passwd" file and add themselves as the root user directly in the file editor! And the system utility "cp" is used to copy and overwrite files. If it has the SUID bit set, attackers can tamper with any file on the system by overwriting the original file with its root privileges! For example, the attacker can copy the original /etc/passwd to a file they own. Then, they can add themselves as a root user by editing the copy of the passwd file. Finally, they use the "cp" command to overwrite the original /etc/passwd file with the modified one. Be Careful! You can see that SUID can become incredibly dangerous when misused. SUID rights should only be granted to programs when necessary, and not to programs that allow command execution or arbitrary writes to files on the system. Thanks for reading! Next time, we'll dive into more privilege escalation techniques that attackers can use to compromise your system.
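As a practical complement to the warning above, here is a minimal Python sketch (an illustration added here, not part of the original series) that audits a directory tree for SUID binaries, so you can review which programs on your system carry the bit. The starting path is just an example.

import os
import stat

def find_suid_binaries(root="/usr/bin"):
    # Walk the tree and report regular files with the SUID bit set.
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)  # lstat: don't follow symlinks
            except OSError:
                continue  # skip entries we can't read
            if stat.S_ISREG(st.st_mode) and st.st_mode & stat.S_ISUID:
                hits.append(path)
    return hits

for path in find_suid_binaries("/usr/bin"):
    print(path)

The equivalent one-liner most administrators reach for is "find / -perm -4000 -type f", which matches regular files whose SUID bit (octal 4000) is set.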
https://vickieli.medium.com/becoming-root-through-an-suid-executable-47473173a6ec
['Vickie Li']
2020-10-30 20:02:16.025000+00:00
['Cybersecurity', 'Hacking', 'Programming', 'Linux', 'Technology']
450
How I’ll decide if I want to join your startup
I am a good salesperson. That means I pay for myself in dividends, and I care a lot about what we are building together. Attracting thoughtful and gritty sales talent is not easy. If you're a leader and want people like me to join your team, here are the four things you should know. Our first interaction should be researched and personalized. Recruiters reach out almost daily. You will only cut through the noise if you are personal, specific, and researched. The most recent message I received from a recruiter was: "We are currently expanding our commercial sales team and are looking for a new Commercial Sales Representative. I've reviewed your profile and believe you may be a fit for the role. Should you be interested, I would like to discuss the opportunity with you in more detail." I'm especially disinclined to respond to a message like this. If you have the audacity to claim you reviewed my profile (I hope that's true!) and really believe I might be a fit, then you have the context to describe why. Is it my industry experience? The types of companies I've had success with in the past? Something I posted? Something specific your company is looking for that I have? I get it. These messages are canned and recruiting is a numbers game. But I feel pretty strongly that if you're trying to hire an epic sales rep, 10 personalized messages to strategically sourced candidates will yield higher responsiveness than 100 unpersonalized ones. When I look back, I've only had serious job-related conversations with startups that either came my way via an introduction from someone I trust, or from a personalized note from a recruiter or hiring lead. Equity shouldn't be an elephant in the room. If you're going to tout equity as part of your "holistic comp package," you'd better be prepared to discuss what it means and how I should be thinking about it. Equity is a motivating concept for a lot of candidates, and sometimes joining a startup means you make some sacrifices in anticipation of that ownership. I've thought a lot about why this topic is so cagey in most interview processes. I presume some hiring managers are scared of making 'promises' about equity that can't necessarily be kept. Or that the value and nature of that equity are subject to change in the future based on things like fundraising. But neither of those is a good reason to be vague about what you're offering. Too often, equity is simply offered as a number — a number of shares or basis points — and candidates are expected to believe blindly that it's a reasonable one. There's no open dialogue around total outstanding shares, valuation, fundraising plans, dilution risk, why you picked that number, and so on. Why should I as a candidate get excited about ownership in your company if I can only conceptualize its value by using my own imagination? Paint the picture for me. Candor is reflective of your culture. I want to know what's real today. It's great to hear about and to discuss the upside of joining your company. TAM, the product roadmap, your pipeline, your goals. I love it, and hopefully there's no ceiling in sight! But let's spend a lot of time talking about what's real right now. What evidence do you have to prove the reach and appetite that your product has in the market? Why are your existing customers paying you money today? My long-term goal is to help transform your business into a household name.
But my short-term goal is to know why the product has legs today, so I feel confident that I'm going to make an immediate impact (and, well, money). Give me space to do my own diligence. You're highly invested in figuring out if I'm the right candidate for the role you're hiring for. Right back at ya! The perfect candidate will be game for hours of interviews to prove they've got the chops for your needs. And the right employer will do the same for the candidate. The general consensus is that joining a startup is a risky endeavor. I'm empathetic and cautious toward that point of view, but I've developed my own diligence process that makes me feel confident and secure doing it. Personally, I want to assess your tempo — looking for Goldilocks indications that you're moving quickly but not recklessly. Good example: Veeva Systems. Bad example: WeWork. I want to hear about the milestones — macro and micro — that you have hit along the way. And further, what 'next steps' were unlocked by hitting those milestones? There must be at least some reasonable method to the madness. Every candidate should have their own unique approach to diligence, and I would encourage you to treat it as a strength when it shows up. Diligence indicates a candidate is serious, curious, thorough, and solutions-oriented, which are four qualities you're hopefully vetting for in a rockstar sales candidate. That's my inside skinny on acquiring sales talent that will thrive and grow under your leadership. No doubt the list will change and mature over time, but I suspect these themes resonate with a lot of salespeople considering a jump into the startup trenches with you. I can say for sure that, had any of these boxes remained 'unchecked,' I wouldn't have joined Proton. And I wouldn't have joined Proton as prepared to win as I did. The investment that every person made in the hiring process was immense. My team painted the full picture of what it's like to work at Proton, filling me in on what I'd missed in the 2.5 years since the company's inception (including how exactly the product and customer base had evolved over time). And they were never shy about sharing the unique characteristics I brought to the table that they found specifically exciting and applicable to the business. It made my (admittedly autonomous) onboarding feel faster and happier, and it meant I could build processes and close major deals within six months on the job.
https://medium.com/@jill-voege/how-ill-decide-if-i-want-to-join-your-startup-c1a974557001
['Jill Voege']
2020-11-23 16:28:08.331000+00:00
['Technology', 'Recruiting', 'Startup', 'Unicorn', 'Sales']
451
The EV revolution is (really!) coming
But this is just one aspect of what is coming. Let's first consider why it makes so much sense for China to push this kind of policy and phase out ICE technology. China ranks 2nd, after the USA, in the world for oil consumption, accounting for around 13% of the total. Even though China is the 4th-largest producer of oil in the world, this is not even close to sufficient to cover its consumption. Therefore, China is a net importer of oil; in fact, it imports around 60% of the total consumed every year. In 2016 this translated into 7.5 million barrels a day. Nowadays a barrel of oil costs around 50 USD, so this works out to roughly 7.5 million barrels × $50 per barrel, or around $350–400 million every single day! It is easy to understand why China has every interest in diminishing the total amount of oil it needs to import. Every dollar used to push consumers toward EVs actually helps keep one more dollar in the country, flowing through the national economy. But of course, it is not only that. China has had, in recent years, a huge pollution problem. Its cities were becoming more and more dense with PM2.5, and its air every day more dangerous. There was a need for a change of policy, not only for the car industry, but also in the way energy is generated and managed, and that is exactly what is happening. And since China started to take action a few years ago, the results are slowly but surely becoming visible in the sky's color. Finally, China is increasingly taking center stage as the new leader of the world, both economically and, even more importantly, politically, especially after the Trump administration. Environmental protection and the energy transition are surely going to be the biggest problems we will have to solve as humans in this century, and China wants to set the general direction here, to prove that this is the "China Century" we are living in. Although the political side could change with Biden's presidency, it is unlikely that the economy will shift away from the Asian giant in the short term. Put all of the previous considerations together, and you will see why a change of policy in the automotive industry could really mean a revolution for the whole world this time. Let's recap for a moment. We have a strong push coming from one of the top economies of the world to back a full transition from ICE to EV vehicles. This push translates, on one side, into the subsidies given to everyone who buys electric. But even more importantly, it also translates into the backing of several Chinese EV makers and battery producers, as we will see, and into enormous investments in the infrastructure needed for this transition. The numbers expected are huge, and this is opening up the chance for new actors to come onto the scene. Who, then, are the main EV producers coming into the spotlight, seen as having the potential to take the automotive market by storm in the coming decade? Let's check some of them. Photo by Martin Katler on Unsplash. Tesla. This is obvious, so I will not spend much time talking about this company and its visionary founder Elon Musk. But I want to remark once again on the numbers we have seen before. If Tesla could just retain its 12% market share in China, and somehow deliver a comparable number of cars globally, it would need to scale up from the 500,000 vehicles of 2020 to around 5 to 6 million in 2035.
Nevertheless, the total China market would account for 15 million vehicles, and Tesla would take around 1.8–2 million of them, leaving space for another 13 million sold by other companies. Keep this huge number in mind: 13 additional million to be divided among other brands. The cake is huge and could be enough for everyone; it is not even a question of competition anymore. Nio EC6. Nio. Everyone is now talking about Nio as the "new Tesla" or the "Tesla of China". But Nio is also so much more, because: They are backed by Tencent, one of the giants of China's economy. They have a peculiar business model: they target the "new rich" of the growing Chinese middle class, and they offer the Battery as a Service (BaaS), which helps them decrease the starting price of their cars. The BaaS, with the possibility of replacing the battery at swapping stations in just 3 minutes, as opposed to waiting for charging, is also backed by the Chinese government, which has recently announced a plan to define the swapping standard. The plan has been dubbed "Blue Sky", and guess what the Chinese name of Nio is? Weilai… Blue Sky Coming. Nice coincidence! If you think about it, it makes so much sense to have a distributed network of batteries which can be connected to the main energy grid and act as buffers for peaks in energy demand. Swapping stations can become an integral part of a smart grid. No wonder the government is pushing for this adoption. Nio's battery supplier is CATL, which is also the battery supplier of Tesla's Shanghai Gigafactory, and which will also be a partner in the Battery Swap Service and BaaS. CATL is also backed by the Chinese government, which actively invests in the rare elements needed to produce batteries, practically giving cost advantages to companies such as CATL. CATL, by itself, is therefore another actor to keep in mind in this EV revolution. This November, Nio announced a new 100 kWh battery pack, which will take the range of their cars past 600 km. A 150 kWh pack is also currently being developed, which could take the range to 900 km. Still scared about EV range limitations? So far they have been selling only SUVs, but a new sedan should be on the way, to be officially presented in January 2021, and possibly another one during the year. China is the biggest sedan market in the world, so if they can already achieve such big results without even tackling this market… Nio has seen its delivery numbers skyrocket this year, and will close the year with around 45,000 deliveries, growing around 150% from last year alone. They are projected to reach 100,000 vehicles next year, growing once again more than 100% from the current level, and some analysts think they could take up to 30% of the global high-end EV market in 2035… again, we are talking about maybe 1–3 million cars in China alone, starting from less than 50,000 this year! It is a giant leap, which calls for massive investments in production capacity. But still, it is not even close to saturating the market. Li ONE. Li Auto: this is not exactly a full EV maker, as it currently produces only one SUV, which has a range extender, a small ICE used for the sole purpose of charging the batteries, increasing the total range of the vehicle. But the deliveries of this SUV have been quite good recently, and by the end of the year they should sell more than 30,000. This is quite impressive, as is the fact that they already have a good gross margin on its production.
They probably have a chance to catch up, but can they put new vehicles on the market fast and still with good results? Will they go into the pure EV space? Do they have the money to make all these investments in such a short time? The market is surely open enough to welcome them too. Xpeng P7. Xpeng: this is the brand trying hardest to follow and emulate (some say copy) Tesla. The new P7 is quite nice, as you can see, and is a direct competitor of the Tesla Model 3. Their prices are quite cheap compared to Tesla, so they are trying to get the lowest-price piece of the EV cake. This is quite risky, as margins are going to be lower than other competitors', and this makes it difficult to reach profitability, especially at the beginning, when they will need to make huge investments to achieve volume and roll out new vehicles. But Xpeng is also the only Chinese full-EV company to currently own its production facilities (they are even building their second plant already), and they are backed by the local Guangzhou government, as well as by the other big tech giant of China, Alibaba. Also, they are very active in the development of the software required to reach the autopilot stage, with many managers coming from Xiaomi for this purpose. Their deliveries are also growing at a fast rate, with very good numbers for the P7, but it seems that the other model, the G3 SUV, is not performing as well, and this could be a problem long term. BYD Han
https://medium.com/@stefanoosellame/the-ev-revolution-is-really-coming-cb1a14d7e4a8
['Stefano Osellame']
2020-12-11 01:46:39.081000+00:00
['Self Driving Cars', 'Technology', 'Tesla', 'Cars', 'Electric Car']
452
Boosting student’s mental health — CDTM Center Venture 2020
The Centerlings behind this year's Center Venture. As in almost every year, the CDTM offered its popular Center Venture elective again. Despite the challenging situation due to COVID-19, the two-week elective in a hackathon format took place in a hybrid setting, resulting in two impactful project outcomes. Following past topics such as entrepreneurship in robotics or student housing, this year's Center Venture focused on the problem space of students' mental health, in cooperation with TUM4Mind and Make Munich Weird. What is an elective at CDTM again? As part of the CDTM curriculum, every student has to take part in at least three electives. Electives cover a variety of topics and are designed to complement the core courses by offering students the opportunity to deepen their knowledge in an area of their choice. Their format ranges from sessions spread out over a semester to intensive hackathons. You can find out more about the CDTM study program here. Students' mental health, an overlooked topic? To understand the problem space, the Center Venture began with an input lecture providing background information on mental health. We learned that in recent years the number of diagnosed mental illnesses among students in Germany has increased significantly (Barmer Arztreport, 2018) and that one in four students feels strongly stressed (Techniker Krankenkasse, 2017). This issue is currently only partly addressed in Munich, by offerings such as our partner TUM4Mind's awareness week, which organizes online self-help workshops. In addition, we received further input from our second partner, Make Munich Weird, a transdisciplinary platform organizing initiatives that activate spaces and foster diversity for a more livable Munich. Keeping the aspect of physical space in mind, we learned from Chrissie Muhr, Creative Director at Vitra, how rooms influence mental wellbeing, and from Prof. Isabell Welpe about the history and principles of hacking. How can we create awareness and shape an open dialogue? Now having a better understanding of the problem, we set ourselves the goal of facilitating a lasting open dialogue about mental health within our universities. To achieve this, we applied our creativity and technical skills to hack student life in Munich through physical and digital dimensions. To come up with specific ideas and to ensure user-centricity, we used a design-thinking approach following the double diamond process. If you study at CDTM you will encounter this graphic many times… In the first phase, we listed our worries and the topics we are struggling with and tried to identify their underlying factors and drivers. We then looked for patterns and clustered them into the four related topics that you can see below. Our sources of worries and struggles. Based on the identified problems, we came up with a toolbox of solutions by asking ourselves what advice we would give our best friends if they faced that problem. Toolbox to solve the identified problems. In the last step of the ideation process, we combined the identified problems and the toolbox to ideate projects that address all students in Munich and cater to their needs. By prioritizing according to impact and feasibility, with respect to our limited time during the Center Venture, we then chose two out of our 24 ideas, namely the Wall of Failure and Meet Other Students, which we immediately started implementing.
Within 20 hours we created two websites, ran a guerrilla marketing campaign, secured a partnership with a café, put up two physical walls of failure, marked a designated meeting bench, and had a lot of fun. This once again showed us how little you actually need to change something. Using a world café approach, we came up with 24 separate ideas. Wall of Failure: In our performance-driven society, there is a missing dialogue about failure and setbacks. They are seldom shared publicly, and people don't like to talk about them, even though failures are a natural part of our lives. The Wall of Failure addresses this issue by offering a space to anonymously collect and share stories about failure, from students for students. Our goal is that this leads to more open communication about expectations at university and that students feel less pressured. The Wall of Failure exists in physical versions at the Technical University of Munich and the University of Munich, as well as in the form of a website and a newsletter. You can find the website here: https://wall-of-failure.org. One of the two physical walls in front of TUM's Mensa Arcisstraße. Screenshot of the virtual Wall of Failure. Meet Other Students: Munich is home to over 100,000 students, many of them international, attending several different institutions. Due to the current restrictions related to COVID-19, it is very difficult for them to get to know new people. Many of the students who recently moved to Munich, in particular, feel isolated and are looking for ways to establish new contacts. Meet Other Students is a Slack community that solves this issue by randomly matching students on a one-on-one basis every week. Students have the chance to meet virtually or in person, depending on which Slack channel they join. So far over 300 students have joined the community, and the feedback has been overwhelmingly positive. You can find more information on the matching process and a link to join the community here: https://www.meetotherstudents.org. Overview of the matching process. Students warmly welcomed the idea. Last but not least, a résumé: This year's Center Venture was special in some ways. It was the first one to take place in a hybrid format — something rather challenging for a hackathon with a physical component. It was also the first time that a serious and pressing issue such as mental health was addressed, with none of us being experts in this field. Judging by the feedback that we received when presenting our solutions on multiple occasions, we can see, however, that we were able to achieve our goal of sparking discussions on students' mental health and creating awareness. Thank you very much to everyone who contributed to this year's Center Venture and made the elective possible! What is the CDTM? The Center for Digital Technology and Management (CDTM) is a joint institution of the two universities in Munich, the Ludwig-Maximilians-Universität München (LMU) and the Technische Universität München (TUM), offering the interdisciplinary add-on study program Technology Management. Students from various study backgrounds with creative ideas, great motivation, and an entrepreneurial mindset are offered the tools to put their ideas into practice. Find out more about the CDTM here.
https://medium.com/cdtm/boosting-students-mental-health-cdtm-center-venture-2020-867494f30415
['Matthias Heinrich Morales']
2020-12-26 08:02:44.161000+00:00
['Hackathons', 'Technology Management', 'Munich', 'Mental Health', 'Cdtm']
453
Are Lyft customers nicer than Uber customers?
Every Lyft driver I rode with mentioned it: "Lyft customers are much nicer than Uber customers." They're not sure why, but the attitudes are miles apart. Most drive for both Uber and Lyft, so they would know, and regardless of city they say the same thing. What, then, does it say about the brands themselves? The drivers I spoke with in the NYC area and Boston describe Uber customers as being rude, demanding, and full of complaints. Is it Uber's infamous surge pricing practices that put customers on the defensive? If so, should Uber tell its customers to "Uber-have"? Drivers said Uber customers make them feel "like a second class citizen", saying they are "angry and bossy", and often leave "a mess of food and drinks." One driver even mentioned an Uber customer who brought a huge dog into her tiny car, while another started polishing her nails — fumes on high. Lyft customers go the other way. Drivers say most are "extremely considerate" and "pleasant to deal with." One driver said she feels safer when customers are happier: "when there's no stress in the air, we lower our chance of an accident." A recent tweet by a Lyft driver goes a bit further: "I picked up a passenger and we got along so well we ended up grabbing dinner. I love people!" But that's the odd part: the service between Uber and Lyft is the same. Exactly the same. Same car, same driver. Same same. So why the difference in disposition? Are Uber customers reflecting back the brand they've come to know and be wary of? Mistrust has been known to put a damper on budding relationships. Perhaps it's Lyft's pink moustache of days gone by that inspires more smiles than scowls. Or maybe it's how the brands approach new customers. The homepages of Uber and Lyft provided some direction. Uber.com has rotating images at the top, most of which target new drivers — not riders. Lyft.com is all about the customer, and makes it easy to get a car with a dead-simple form. Competitive brands like Gett are coming up fast with a solid product, growing market share, and global ambitions. In a sector where disruption is the norm, both Uber and Lyft need to keep their eyes on the road — and in the rear-view mirror. Smart underdogs like Gett have a history of blowing past the status quo. Bigger-picture implications? Brands not 100% customer-obsessed (especially those in the competitive segment of transportation) should hit the brakes and get ready to make a U-turn. — Anthony Cospito, Managing Director, Popbox Digital. Originally published at popboxdigital.com on June 24, 2015.
https://medium.com/business-startup-development-and-more/why-are-lyft-customers-nicer-than-uber-customers-543e1918f875
['Anthony Cospito']
2016-04-19 19:08:48.751000+00:00
['Mobile', 'Business', 'Technology']
454
“The Role of a Leader is to Help Others to be More Successful”
As a part of the Software Engineering chapter at Porsche Digital, Javier Donado creates digital products and helps to launch them into new markets. He draws from various professional experiences and is constantly experimenting with the latest technologies. Besides technical skills, Javier aims to be a good leader and to keep his mindset open to new ideas. Javier, what is your job at Porsche Digital? There are many tasks that we software engineers carry out at Porsche Digital. But if I had to describe it briefly, I would say that I help to create new digital products by turning ideas into compelling working solutions. We are constantly bringing new products to the market and trying out all kinds of business experiments to build and create digital customer solutions. My job is to find out first how the available technology can help to test our hypotheses in a feasible and convenient way. Together with a multi-disciplinary team, I then coordinate an action plan so that we can turn it into reality. Although this is the activity that takes up most of my time, there are many other activities that my colleagues and I usually carry out in the company and which I find quite interesting. For example, we also help our business management and partnering colleagues to evaluate technology-related risks when scouting for new ventures, or we provide advice, support and maintenance for internal tools that we use in the company for our own processes. What is the most crucial skill in your job? You need to be open to new ideas and often reinvent yourself. If you are convinced that you are already an expert in one special technology and you keep doing things the same way, chances are high that you miss out on better solutions. Why might that happen? Because everything will look like a nail to you if the only tool you have is a hammer. It is therefore very important for us to be open-minded and to not restrict ourselves to the technical part of our job. Instead, we try to see as much as possible of the big picture and learn everything we can from our colleagues who are specialists in other areas. In this way, we can create business value together as a team. Doing that in a sustainable and maintainable way is also an important part of the technical challenges we face. Having held various technical leadership roles in product teams, what was the biggest learning for you so far? I have learned that good ideas for technical solutions can come from anyone in the team — even from people who have no technical background — and it’s your responsibility to listen to them all. What do you think makes a good leader in the digital age? In my opinion, the role of a leader is to help others to be more successful. To achieve that, you need to pay attention to what they have to say — that applies to almost every period of time, and the digital age is no exception. Good leaders will always be open to new ideas, no matter how experienced they are. Today, with things changing dramatically within a few months, we need a beginner’s mindset more than ever if we don’t want to be left behind. Why did you become a part of Porsche Digital? What fascinated you? Although I wouldn’t say that I’m a car person, I have always found Porsche fascinating. As a child, my favourite toy was a 911 model car. When I grew up and studied engineering, I always had a deep respect for the German engineering tradition. Being a part of that tradition is an honour to me.
Porsche Digital is the perfect place for someone with my background to keep on contributing to this engineering tradition while at the same time adding value for Porsche in the digital age.
https://medium.com/next-level-german-engineering/the-role-of-a-leader-is-to-help-others-to-be-more-successful-d147771ad659
['Porsche Digital']
2020-12-03 13:17:01.428000+00:00
['Company Building', 'Leadership', 'Software Development', 'Career Development', 'Technology']
455
Return to the Future
In 2020, our society is divided. Technological advance has stalled. We’ve lost coherence and dynamism. People born in the early 20th century saw the arrival of airplanes, telephones, cars, televisions, computers, and spacecraft. They expected continued progress. But somewhere along the line, reality went off-road. Man last stepped on the moon 48 years ago. The sci-fi futures our ancestors envisioned look as far in the future today as they did then. Most think 2020 will be remembered for its chaos: plans derailed, businesses shuttered, windows shattered, and sidelong glances at masked neighbors. But chaos brings clarity. We have lost the future because we’ve lost our common cause. If we can return to a society based on shared values, we can converge on a vision for the future. If we share a vision for the future, we may be able to build it. Today, we must build the future our values demand. Most dread aging; we seek eternal life. Most gaze passively at the stars; we design the machinery of exploration. Most hope purpose finds them; we live with purpose every day. We are proud to announce Praxis: a new society that supports ambitious founders, creators, and pioneers in discovering and executing on their life’s work, while working together towards a shared vision for the future. 2020 won’t just be remembered for its dark moments. With courage, vision, and faith in the frontier, we can ensure that 2020 is remembered as a turning point: The inception of a new golden age.
https://medium.com/bluebook-cities/return-to-the-future-aecc4e547cfc
['Dryden Brown']
2020-12-03 21:49:56.054000+00:00
['Technology', 'Society', 'Cities', 'American History']
456
Synbit synthesizes everything, creating a brand-new world
As a decentralized synthetic asset issuance protocol running on Ethereum, the Synbit protocol can be used by anyone to recreate traditional financial products and derivatives based on cryptocurrencies and other assets. By providing traders with a wide variety of digital assets and traditional financial derivatives to trade, Synbit is committed to offering users a safer, more convenient and more efficient synthetic asset trading platform. Compared with current synthetic asset platforms, there is tremendous room for improvement in security guarantees, risk control, issuance mechanisms, trading experience and reward mechanisms. Synbit creates a mirror world for real-world assets, allowing assets from the traditional trading world to be traded on-chain via Synbit. On Synbit, we can buy and sell stocks, real estate, precious metals and commodities. Of course, only virtual representations of these financial assets can be traded on-chain; however, their prices can be anchored to real-world asset prices in real time through oracles, pushing out the boundaries of finance. Synbit is primarily used to create synthetic assets on-chain, and the value of all synthetic assets is backed by collateral. The collateral assets are currently preset as ETH, DAI, SYN, USDT and USDC. When a mortgagor creates synthetic assets, debt is generated, and he or she must destroy (burn) synthetic assets to repay the debt before unlocking the collateral. Apart from minting, users can also swap directly between different sorts of synthetic assets by purchasing them. Users get double rewards in the process of creating or purchasing synthetic assets: (1) profit from trading and (2) revenue from providing liquidity to the platform, which encourages users to supply liquidity and promotes the growth of the ecosystem. In addition, Synbit adopts a distinctive asset trading model: firstly, users do not need a counterparty to trade assets. When a trader wants to exchange pUSD for pBTC, he or she can convert it directly at the current price without a counterparty. Secondly, Synbit provides theoretically unlimited liquidity without trading slippage, which effectively addresses the liquidity and slippage problems faced by DEXs (decentralized exchanges). Why do people use Synbit? 1. Synbit helps investors reach a broader asset class. Through synthetic asset transactions, trades can be carried out without actually holding the underlying asset. You can trade an asset, for example, even if you don’t hold BTC or Apple Inc. stock. This not only reduces the friction of asset exchange but also enables rapid exchange between different types of assets, such as Tesla stock, gold, oil and bitcoin. Therefore, synthetic assets can help investors reach a wider variety of assets and enable assets to reach a wider variety of users. 2. Arbitrage Pegged to USD, pUSD is traded on the open market and its price inevitably fluctuates, sometimes above and sometimes below the USD price. Participants in synthetic asset trading generate synthetic assets by collateralizing specific assets, realizing arbitrage from these price fluctuations.
In addition, if users expect an asset (such as BTC) to rise, they can earn income by purchasing its synthetic asset (pBTC). Its price tracks that of the real asset, which means that once users purchase it, they also accept the possibility of the asset going up or down. If users expect an asset such as BTC to fall, they can earn income by selling the synthetic asset (pBTC) and amplify their potential gains by purchasing the inverse synthetic asset (nBTC). As a new concept in the DeFi world, synthetic assets have broad room for development and offer a distinctive trading experience. As a token-economic mechanism with a smooth trading experience and careful design built on the concept of synthetic assets, Synbit presents a complete experience of minting, trading and position management. We will continually verify the security of Synbit and build user trust in practice.
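To make the mint-and-burn mechanics described above concrete, here is a minimal JavaScript sketch. The 500% collateralization ratio, the pUSD denomination, and the function names are illustrative assumptions for this sketch, not Synbit’s actual parameters.

```javascript
// Minimal sketch of a collateralized synthetic-asset position.
// Assumption: a 500% collateralization ratio and USD-denominated
// collateral values; these are illustrative, not Synbit's real parameters.
const COLLATERAL_RATIO = 5.0;

function mint(collateralUsd) {
  // Debt (in pUSD) is the collateral value divided by the ratio.
  const debt = collateralUsd / COLLATERAL_RATIO;
  return { collateralUsd, debt };
}

function burn(position, repaidPusd) {
  // Destroying (burning) pUSD reduces the debt; collateral unlocks
  // only once the debt reaches zero.
  const debt = Math.max(0, position.debt - repaidPusd);
  return { ...position, debt, unlocked: debt === 0 };
}

const pos = mint(1000);       // lock $1,000 of collateral
console.log(pos.debt);        // 200 pUSD of debt
console.log(burn(pos, 200));  // debt cleared, collateral unlocked
```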
https://medium.com/@synbitprotocol/synbit-synthesizes-everything-creating-a-brand-new-world-a7183c19fea2
['Synbit Protocol']
2020-10-26 08:20:37.565000+00:00
['Blockchain', 'Blockchain Technology', 'Future', 'Blockchain Startup', 'Blockchain Development']
457
9 trends that designers need to know in 2021
What an era 2020 has been! With one lockdown after another, constant bad news and social distancing, 2020 limited experiences and innovations. Thank goodness that, unlike the other pandemics of history, COVID-19 happened in a world connected by the Internet! When it comes to the design world, new trends and concepts appear every year, as market saturation always pushes leaders to innovate and followers to keep trying to catch up with the trend. Despite being wildly exceptional, 2020 didn’t stop some major improvements and innovations in the UI/UX world that are expected to shape 2021 — notably immersive 3D elements, soft shadows, and floating elements. The pandemic led to rapid digital transformation and the establishment of real technological infrastructure to fill the gap left by human absence, creating more demand for designers, unlocking team potential, and driving radical systemic change. The year ahead promises more change, and we have a responsibility to reflect on its catalysts and outcomes. Here is the list of design trends predicted to mark 2021. 1. A design system is your new team player Digital products have kept the world together during these unprecedented times. But they’ve also revealed deep inconsistency, unexploited opportunities, disinformation, and chaos. Organizations started adopting design systems years ago to bring a structured progression to their work. Yet what was optional before is becoming a must-have in 2021. A design system operates like an extra team member: it reduces the synchronization effort between departments and the need to visually design every state of every screen. By assembling all UX/UI components in a shared contribution space, teams can think more efficiently, fulfill business strategy, and create coherence between design and development, so that everybody working on a brand or product line operates the same way. Design systems are expected to be the main competitive advantage of 2021. 2. Minimalist, minimalism, minimal… UI Minimalist UI design is the one trend you can never call “new,” as it appears and disappears every few years. When using a digital solution, especially websites and mobile apps, users don’t have the patience to spend time figuring out how the app works, or scrolling through irrelevant notifications, pop-up windows, or anything else that slows them down. They are looking for a simple, clear, and easy-to-use interface. Eye-tracking tests are essential to determine which parts of the website to focus on. 2021 UIs are expected to be free of excess information, irrelevant components, and intrusive, non-design-friendly ads, offering seamless navigation to ensure a purposeful user journey through minimal, simple, and time-saving design features. 3. Less standardized… more individualist Standardization became a respected aspect of design early in the field’s life, setting complex rules to follow in order to ensure consistency and flawless designs. Yet it obviously limited uniqueness and designers’ personal talent, and almost resulted in the delivery of similar-looking products that look and operate to the same high standards. 2020 helped demolish this phenomenon remarkably, establishing a competitive vision based on approaching every product differently and showing the brand’s fingerprint in every project without paying much attention to standardization.
2021 is expected to push the vanishing of standardization further, leaving more space for individualist, personalized UX that serves every product differently according to its target users. Consumers are looking to engage with brands they can relate to — brands that feel approachable, relevant, and authentic. To achieve a genuine connection, designers should collaborate with marketers on a storytelling approach, creating stories around a product that captivate users, touch their emotional side, and build a personal connection. 4. The DARK mode is here to stay 2020 literally made dark the new light for everyone’s eyes! Dark mode became a must-have feature on every kind of platform, whether a website, social media, or anything else. 2021 is expected to elevate dark mode to a survival feature in everything we develop, bringing a more modern, sleek, sophisticated approach that allows designers to be creative and explore new ways to manage design elements. 5. Back to hand-drawn designs Oh yeah, just like you heard it! 2021 is bringing back the human touch in design! Harsh digitalization and ultra-sophisticated design tools deprived us of natural human drawings, hand-written text, and an artisanal approach to design. Adding hand-written and hand-drawn elements will mark 2021’s trends, adding uniqueness, character, and a personal touch to every product. 6. Augmented reality, virtual reality, and AI designs Virtual reality and augmented reality were among the main design trends of 2020, used in education, healthcare, museum and house tours, and more. They will remain so in 2021 and beyond. Some companies are even creating full-blown “virtual office ecosystems.” We’ll soon see more corporate meetings in VR, and AI-powered design tools — such as design systems — that bridge designers and developers. Major design tools such as Sketch and Figma are keen to incorporate remote teamwork into their features. 7. Generative design Another potentially promising UX/UI trend is generative design: programs that can analyze a large array of similar data, find its defining features and characteristics, and then create new designs based on the provided data. Algorithm-driven design tools can create layouts and marketing materials, choose visual styles, and generate presentation mockups. 8. Touchless interactions The 2020 pandemic made us realize the importance of voice user interfaces in UI/UX. It is speculated that 2021 will bring more focus on touchless device interaction methods, including air gesture control. These could help reduce language barriers, the loss of context when tasks are left incomplete, the impact of speech impairments, and more. 9. More advanced 3D and immersive experiences 3D design can never be called new, but 2021 is expected to take interest in 3D components and entire 3D scenes in interfaces beyond casual levels. Conclusion No matter how much we try to limit our expectations, the innovative human mind never stops surprising us. All we can hope is that 2021 brings far better fortune than its predecessor, 2020. What do you think of these trends? Are you willing to apply them in your designs?
https://medium.com/@rania-mdimagh/9-trends-that-designers-need-to-know-in-2021-8aa39e720ad6
['Rania Mdimagh']
2020-12-16 14:06:21.766000+00:00
['Design', 'UI Design', 'Trends', 'Technology', 'Design Systems']
458
Tonight’s comic wants to know if we live in a simulation.
I make cartoons and t-shirts at www.dieselsweeties.com & @rstevens. Send me coffee beans.
https://rstevens.medium.com/tonights-comic-wants-to-know-if-we-live-in-a-simulation-29c3444ee370
[]
2019-04-10 04:10:48.067000+00:00
['Philosophy', 'Technology', 'Humor', 'Virtual Reality']
459
Josiane Peluso Highlights 2019’s Best Equipment and Accessories for the Skilled Hiker
If you are looking to hit the trails this summer with friends and family, it may be time to stock up on some of the year’s best hiking equipment and accessories. From safety equipment to new electronics, avid outdoorswoman Josiane Peluso lists the items you can’t miss as you prepare for the summer hiking season. Add them to your list of don’t-forget items. Technology — GPS-enabled tech tools keep improving, getting smaller, and becoming more affordable with each hiking season. To improve your safety and security on both short and long hikes, check out these new tech options for the avid hiker before you hit the trails this season. — Garmin inReach Mini — Communication is key, and this tech tool allows hikers to send and receive text messages without cell reception, through satellite technology. A larger version comes with map-enabled features as well. — Garmin Fenix 5X — According to Josiane Peluso, data tracking is all the rage, and this GPS-mapping watch is so convenient you can wear it every day. It offers hikers the ability to load GPX map tracks to help you follow the right path, and it also tracks heart rate, sleep patterns, and workout data. You can connect it to your smartphone for easy data readouts. Storage Capacity — A comfortable new backpack is just the thing to get you out on the trails this summer. Check out REI’s Co-op Tarn line for hikers of all sizes and ages, priced at less than $50 each. Josiane Peluso shares that the most important things to look for in a hiking backpack are accessible water holders, ventilation to allow for airflow, padding in the shoulder straps and hips, and sizing (just enough for everything you need to hold, but smaller than everyday packs). Water Safety — The Sawyer MINI water filtration tool is a great way to ensure you stay hydrated and safe for as long as you need. The main selling features for Josiane Peluso were the small, lightweight bottle and that it successfully removes 99.9% of bacteria from water sources. For a leak-free water bottle that will keep your drinks cold for your entire hike, check out YETI’s Rambler Vacuum Bottle. Insect Protection — No one likes creepy crawlers, whether you’re on the trail or off it. To protect yourself from insects on your next hike, try DEET-free REPEL Lemon Eucalyptus Natural Insect Repellent. For off-trail hiking, use Cutter Backwoods Insect Repellent; and for camping or backyard BBQs, try a Thermacell MR-PSR Patio Shield Mosquito Repellent. Furry Friends — Don’t forget a collapsible water bowl, available in many sizes and models at your local camping retailer or online. A pro-hiker tip from Josiane Peluso is to pick one with a hook to connect it to the front of your backpack for easy access, to keep your hiking companions hydrated alongside you. Also important: check out the inexpensive Counter Assault Bear Bell accessory, which you can attach to your shoes or pack to announce your approach to any furry friends you are not excited to meet on the trail! Other Accessories — If you’re going to be out after dark, a headlamp like the Petzl Tikka XP can help you lengthen those hikes and keep you safe in rough or unfamiliar terrain. They are also useful for camping, whether you’re cooking, making camp in the dark, or reading a good book after the fire dies down. An iconic Swiss Army Knife is also a great hiking and camping tool that’s small and easy to add to your daily pack, but can provide a lot of help on the trail no matter what situation you find yourself in.
https://medium.com/@josianepeluso/josiane-peluso-highlights-2019s-best-equipment-and-accessories-for-the-skilled-hiker-6c120aa73d4b
['Josiane Peluso']
2019-06-25 13:18:05.701000+00:00
['Trails', 'Summer', 'Adventure', 'Hiking', 'Technology']
460
Côte-des-Neiges (Chapter XIV)
I’ve Got Dreams to Remember… “The sign for Playa Dorada Beach Resort will be on your right,” I said. Candy nodded and continued on Calle Principal, the main thoroughfare in the Complejo Hotelero Playa Dorada, or Playa Dorada Hotel Complex. The place hadn’t changed much since the last time I was there. It looked pretty much as I remembered from the first time I visited the site, in early September of 1989, and every visit since. Like many recent arrivals in Puerto Plata, back then I was looking for work at one of the many resorts inside the sprawling complex. The wages — even for a lowly janitor — were ten times those of the average laborer in one of the garment factories inside the city’s Free Economic Zone, or Zona Franca. Being fluent in both English and French — my parents being West Indian and Haitian immigrants, respectively — I was sure to land a front-line position in either Guest Services or Food & Beverages. The first thing that shocked me then — and continued to shock me every time I visited or worked a shift — was how incredibly clean it all was. The streets, then as today, were meticulously paved and swept clean on a daily basis, without a single pothole or blemish. The road signs were shiny and new, the lawns expertly manicured, and the roadside lined with palm trees, native birds, plants and flower beds — an explosion of colors, scents and sounds that overwhelmed the senses. And not a single piece of trash in sight. Rows upon rows of Leuenbergeria quisqueyana, or Rosa de Bayahibe, the country’s national flower, adorned the street. The flower, endemic to the island, comes from a rare cactus that grows leaves; the bloom itself is a delicate pink. It is native to the Bayahibe area near La Romana, on the southeastern coast. Cubanola domingensis, or campanita — small, showy trees — jostled for position under the sun. The distinctive notes of Magnolia pallescens, an endemic species of magnolia, perfumed the midday air. Flowering red Cayenas or Hibiscus rosa-sinensis, Trinitaria, or Bougainvillea spectabilis Willd, and Coralillos — the latter being a plant originating in India which has managed to make Hispaniola its home — added a burst of color. Calyptronoma, Hemithrinax and Zombia palm trees stood guard, arrayed like a pinnate Terracotta Army on either side of the street, with scattered clusters nearby. The sturdy trunks of Roystonea regia, or Cuban Royal Palm, served as staggered lampposts for the mercury-vapor lamps. Of the islands in the Caribbean, Cuba has the most species of palm trees, but Hispaniola, having similar geography and topography, is not that far behind. The abundance and sheer variety of these plants in Playa Dorada made that fact abundantly clear. The pitch-black, recently resurfaced hardtop, with its bright yellow pavement markings, made the perfect canvas on which to project this amazing rainbow of colors, shapes and textures. The roar of the waves, brushing against the white sands just behind the green canopy, added the perfect accent. The distant whiffs of BBQ’d Longaniza sausages, the faint but punchy syncopated chords of Tambora drums, Güira and accordion — the implements of Merengue Típico, or Perico ripiao, the national music — provided the perfect soundtrack for what was shaping up to be one of the best weeks of our lives. “Watch out for pedestrians,” I warned. “Drunken tourists are not known for observing traffic laws.” No sooner had I finished the sentence than a group of German tourists staggered onto the street with reckless abandon.
They appeared to be on their way to the beach, oblivious to their surroundings and completely ignoring the tour bus a mere ten feet away. Luckily the driver was attentive enough to anticipate their idiocy, applying the brakes and gently rolling to a complete stop with room to spare. Half naked and fit — both male and female — they made their way down the path next to the Jack Tar Village Resort & Casino, rounded a corner and disappeared from view. The driver released the brakes, shifted the bus into gear and continued on his way. “Fuck!” Candy swore in surprise, counting her blessings that we were far enough away to be forewarned. She slowed down again to go over a speed bump, glancing at the flower beds adorning the divided median strip on the road. “Did you see the young girl?” she asked, before the drunken bunch rounded the bend. She was referring to the youngest female in the group. She was topless, like her other friends. Her fresh, innocent face was as blissful as her breasts were firm and ripe. She could not have been older than eighteen, or four years my junior at the time. “What about her?” I asked, glancing up and quickly looking away. Best not take any chances at staring, I thought. Flying too close to the sun, I remembered, did not turn out well for Icarus. “See how happy and carefree she looked?” she asked forebodingly. “She probably thought she was safe among her friends.” “Huh?” I asked, unsure where she was going with that line, but deciding to let her speak and see where it led. “She doesn’t realize one or more of her ‘pals’ can turn against her and rape her.” She seemed to be speaking from experience, I thought. “That’s kind of dark,” I said, quickly adding, “and random. What brought on that thought?” “It happened to me when I was sixteen,” she said matter-of-factly. “It happened at Plage Jean-Doré, during a weekend trip to Parc Jean-Drapeau with some friends from high school.” “Jesus, Candy. You’ve never told me that,” I said, reaching out and taking her hand in my hands. “I am so sorry. What happened?” “There isn’t much to say, really. Two of the guys in the group waited until I was alone and isolated and assaulted me. I fought them off until I blacked out.” “The last thing I remember was grabbing a rock, swinging it with every ounce of strength I had left and hitting one of them on the side of the head. The other one must have knocked me unconscious because I don’t remember much after that.” “Jesus!” I exclaimed in horror. “When I came to, I screamed bloody murder. I ran until I found a payphone in the parking lot and called the police.” “They tried to say it was consensual, but I didn’t fucking budge,” she said proudly. “When the police arrived and called my parents, they came running. They took me straight to the hospital to get a rape kit, then drove me to the Service de police de la Ville de Montréal — Le SPVM — station in Longueuil, and I pressed charges.” “What happened to the two guys?” I asked, not sure if that was the right question to ask, but unable to think of a better one. “They ended up taking a plea deal and served eight years of a ten-year sentence. Each,” she said with pride and satisfaction in her voice. “I’m nobody’s victim,” she said resolutely. She seemed to be reliving the moment in her mind, yet did not seem to be diminished by the experience. While I expected to see pain and shame, I only saw defiance. “My father taught me to never be silent,” she said after a moment. “He taught me to fight, to never be a victim, and never surrender.
When I was attacked, I spoke up and defended myself.” I brought her hand to my lips and kissed it. I felt more connected to her, to her life and her story, in that small moment, in ways I had never been before. “These idiots were about to get run over,” she said, watching the group disappear from view, returning to the here and now and locking away her ghosts. I held onto her hand. “Oh yeah,” I said sarcastically, grateful for the chance to move the conversation onto a lighter topic. “Now imagine a million more like them roaming the country, throwing themselves in front of speeding vehicles. Too drunk to realize they are tempting fate.” “One million?” she asked in surprise. “Yeah,” I said matter-of-factly. “Give or take a few thousand,” I explained. “Maybe less right now, as it is the low season, but come September, it will be a madhouse.” “Jesus!” she sighed in amazement. “Scraping dead tourists off the pavement must be a thriving business in this country.” “You would think so,” I said mockingly. “The truth is, everybody here knows how important they are to the economy — and how quickly things would go to shit if ‘dem gringos start dying for any reason — so everybody goes out of their way to keep them alive while they get drunk as skunks.” It was not an exaggeration. By the first decades of the 21st century, Hispaniola was one of the top vacation destinations for visitors from the European mainland, the United States and Canada. Its popularity was such that even wealthy visitors from countries in South America began to make the island their port of call. This was due in large part to the island’s rich history as the first permanent European settlement in the Americas, and its unique culture. A culture borne out of an uneasy modus vivendi between Spanish Catholicism, African animism and Taíno or Arawakan traditions. But also because of the warmth and welcoming nature of its people, and the friendly atmosphere. Not to mention its great beaches and unique tropical climate. “Who are they?” she asked. “The tourists? Where do they come from?” “Mostly Germans, Austrians, Dutch, and British.” “There is a significant number of Americans in Casa de Campo near La Romana, but they represent the upscale traveler looking for luxury.” “Small in numbers, but big spenders?” she asked. “That’s right,” I concurred. “But the government is now courting the regular folks,” I added. “They tend to spend less and stay in all-inclusive resorts like most of these,” I said, sweeping my hands, motioning to the resort complex around us. “But they make up for that in sheer numbers.” “Ah,” she said with a knowing smile. “High volume/low margin.” The marketing executive in her understood the concept of economies of scale instinctively. “That explains the French-Canadians,” she said with a laugh. Les touristes Québécois arriving on the island liked to party and have a good time, but they were mostly blue-collar workers and civil servants. People who knew the value of a dollar and what it took to earn it. Spendthrifts, they were not. “There are some Scandinavians — mostly from Finland — coming in now as well,” I explained, but after a moment of reflection conceded, “but you’re right, these are folks like you and me.
People who work for a living.” Beginning in the last years of the 1980s, planeloads of eager Canadian, British, German, Austrian and even Scandinavian tourists would land at Punta Cana International Airport, Las Américas International Airport, Cibao International Airport and, beginning in the first few years of the twenty-first century, La Romana Casa De Campo International Airport. “So the whole country is like a giant resort?” she asked, a bit puzzled. “No, not the whole country,” I clarified. “Just the coastal regions on the Southeast and Northwest coasts. Although there are efforts underway to develop Samana Bay in the Northeast and Barahona in the Southwest. But these areas are reserved for ecotourism and cruise ships — as opposed to massive hotels and resort complexes like this one.” A smaller number of visitors also reached the island by sea, via several ferry terminals scattered throughout. The largest of these was the Port of Santo Domingo’s Sans Souci and Don Diego Terminals, followed by the Central Romana Port’s Terminal Turística, or Tourist Terminal, in La Romana. There were smaller facilities like the Port of Puerto Plata — which was superseded by the Amber Cove facilities in the early decades of the twenty-first century. This new facility was marketed as the Caribbean’s newest cruise ship destination. Amber Cove Cruise Center⁸ is located on the Bay of Maimon, or Bahía de Maimón, near the city of Puerto Plata on the country’s northwest coast. The new two-berth Amber Cove Cruise Center is able to accommodate up to 8,000 cruise passengers and 2,000 crew members daily. When completed in 2015, its USD $82 million price tag represented one of the largest cruise industry investments ever made in the country. The Cayo Levantado Port in Arroyo Barril, Samana, was the smallest in the country and was geared primarily toward ecotourism, as the area is a popular whale-watching spot. Every year around the middle of January, like clockwork, the famous humpback whales of Samana⁹,¹⁰,¹¹ arrive, having traveled all the way from the North Atlantic to relax and frolic in the warm waters of the Caribbean Sea. “When we come back from our visit to La Romana, we will drive to Cabarete and Samana and I will show you the area.” “It’s quite beautiful,” I promised. She smiled and nodded, pointing to the Puerto Plata Beach Resort sign up ahead. “Ei, meu açúcar mascavo,” she said, using her favorite term of endearment, “while we are here, what’s something that you’d like to try, but that you’ve been too scared to try?” The question took me by surprise and I had to think about it a bit. I surprised myself even further with my own response: “I would like us to grow closer and get to know each other better.” “I think we can manage that, meu amor,” she said with certainty.
https://medium.com/the-desabafo/c%C3%B4te-des-neiges-chapter-xiv-64be001429f6
['Juan Alberto Cirez']
2020-11-09 18:40:37.770000+00:00
['Music', 'Mental Health', 'Relationships', 'Technology', 'Fiction']
461
Information Technology
Mathematical and reasoning skills are no longer a uniquely human thing. It is becoming the information age’s role to carry them out mechanically and disperse them throughout everyday life. — Pubudu Siriwansa
https://medium.com/@siriwansa/information-technology-6e3cb340fb9c
[]
2020-12-05 02:07:50.493000+00:00
['Sri Lanka', 'Information Technology', 'It']
462
MONTHLY UPDATE — MAY 2019. May has ended and we are back again…
ALTEUMX EXCHANGE Platform upgrades Integration of Trade and Deposit Email Notifications. Do you wish to stay informed about your AlteumX account and trade activity? Receive immediate updates by turning on your email notifications. Date filtering integration in the Trade and Order History pages Performance optimization in the Deposits and Withdrawals pages. Performance updates in the Trading Engine. Persistence in Language Selection. Upcoming: Customer Support Portal Coins and listings Deposits and withdrawals enabled for I/O Coin (IOC), with the IOC/BTC market open for trading. CONTESTS AND PROMOTIONS 1. AGROCOIN TRADING COMPETITION The AGROCOIN Trading Competition has now concluded and all AGRO prizes have been distributed to the winners’ accounts. Congratulations! You can check the final results here: https://medium.com/alteum/agrocoin-trading-competiton-complete-3518b3a03893
https://medium.com/alteum/monthly-update-may-2019-49578b2eead2
[]
2019-06-07 21:56:11.877000+00:00
['Bitcoin', 'Ethereum', 'Exchange', 'Blockchain', 'Technology']
463
How 5G will change the world
5G is the fifth-generation technology standard for broadband cellular networks, which began rolling out commercially in 2019. It offers greater bandwidth and very high download speeds of around 1 gigabit per second — many times faster than typical 4G speeds. It can support about a million devices per square kilometer, roughly ten times more than 4G, and it is expected to have about 2 billion users by 2025. South Korea was among the first countries to adopt it on a large scale, launching nationwide 5G service in 2019. The China-based firm Huawei is a leading maker of 5G network equipment. Here are the ways 5G will restyle your daily life: 1. The world of work One of the biggest impacts of 5G is on the industrial and commercial IoT, or Internet of Things. According to research, more than 500 million objects will be tracked by 2023. Precision agriculture will use soil sensors and airborne cameras to identify crop disease and determine when to water. Smart factories will be able to deploy connected robots to get repetitive and dangerous jobs done. We will see all these changes once the ultra-fast wireless networks are in place, accommodating 125 billion IoT devices by 2030. 2. Healthcare With 5G devices, visits to clinics will be rare. Wearables and implanted medical devices will take your vitals and transmit them to health care providers, enabling them to detect early warnings of heart attacks, strokes, etc. This high-speed technology also allows telesurgery, where specialists in one hospital can control equipment in another a thousand miles away. One telesurgery took place in China, in which part of a pig’s liver was removed successfully using 5G. 3. Smart Cities 5G technology will enable cities to handle data from millions more Internet of Things devices and install low-power sensors that can last for years. This will allow them to smartly control the flow of traffic, air quality, power consumption, and security. 4. Driverless Cars Human error is responsible for the majority of road accidents. With 5G, fully autonomous driverless cars will communicate with the cars around them and with sensors embedded in traffic lights, road signs, and the pavement to navigate safely and prevent accidents and congestion. This will eventually reduce road fatalities. 5. Next-level Gaming 5G will take online gaming to a new horizon with its ultra-high data speed. The gaming market will explode as 5G immerses gamers in AR and mixed-reality experiences by visually transforming the world around them. Conclusion In a nutshell, 5G technology will shape the decades to come. It will bring revolutionary changes from individual to collective spheres of our lives by offering more comfort, ease, and luxuries.
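To put the headline speed in perspective, here is a back-of-the-envelope calculation in JavaScript; the file size and link speed are illustrative assumptions, not measurements:

```javascript
// Rough download-time arithmetic for a 1 Gbps 5G link.
const linkGbps = 1;                        // headline 5G download speed
const movieGB = 10;                        // size of a large HD movie (assumed)
const seconds = (movieGB * 8) / linkGbps;  // gigabytes -> gigabits, divided by rate
console.log(`${seconds} seconds`);         // "80 seconds", versus many minutes on 4G
```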
https://medium.com/@engr-muhammadkashifaslam/how-5g-will-change-the-world-33f8f69425c3
['Engr Muhammadkashifaslam']
2020-12-17 18:50:06.221000+00:00
['5g', 'Artificial Intelligence', 'Information Technology', 'Modern', 'Technology']
464
Video Games May Improve Surgery Performance
Can video games make you better at your job? If you’re a surgeon, the answer appears to be “yes.” Photo by JESHOOTS.COM on Unsplash Gaming Improves Hand-Eye Coordination Several studies have found that playing video games can improve hand-eye coordination. Tan Ying Li and his colleagues conducted a study designed to see if playing a video game could increase hand-eye coordination in a group of medical students from Melaka Manipal Medical College in Malaysia. The researchers split the medical students into two groups. In the experiment group, participants were asked to play the smartphone game “Make Them Jump.” The other group didn’t play the game. The researchers tested hand-eye coordination by asking participants to throw a ball at a wall with one hand and catch it with the other. The number of catches they made in 30 seconds was recorded. They recorded the data before and after the intervention. The results? The video game group caught a similar number of balls as the control group before the video game intervention — but they did better than the control group after playing the game. They also performed significantly better than their own performance before they had played the video game. What does this mean? It seems that, at least in some cases, video games can help improve hand-eye coordination. Video Games and Surgery Video games do not only seem to improve general hand-eye coordination; they also seem to specifically improve surgeons’ ability to perform particular types of surgery. In one study, researchers found that individuals in a group that played a balance game on a mobile phone made fewer surgical errors than individuals in a group that didn’t play video games. These results were corroborated by another study: participants who played a game on the Nintendo Wii made fewer mistakes than those who didn’t. One other study found that individuals who warmed up with a video game performed surgical tasks faster than others who didn’t. Limitations of the Research One common limitation of the research in this area is that there tend to be very few high-quality studies, and the ones that do exist usually have pretty small sample sizes. Because of this, the research as a whole is somewhat tentative in its conclusion about the utility of video games for improving surgery. Also, these studies used different types of games. It could be that some games are more likely to improve hand-eye coordination, and some are not. Strategy games, like online chess, may not have the same effect as the hand-eye-coordination-heavy games used in these studies. Still, the research that exists suggests that there is an effect: video games seem to improve surgery performance to some extent. So the next time you’re being hassled about playing whatever game you’re playing, just explain that you’re training to be a surgeon. I’ll back you up.
https://medium.com/everyday-science/video-games-may-improve-surgery-performance-1f267361d59a
['Ramsay Lewis']
2020-05-21 21:53:15.980000+00:00
['Culture', 'Technology', 'Psychology', 'Gaming']
465
Bitnovo joins the Business Alliance for the Child Vaccine
Business Alliance for Child Vaccine, promoted by ‘la Caixa’ Banking Foundation Today we want to share a message of solidarity and cooperation, announcing that Bitnovo is now part of the Business Alliance for Child Vaccine, promoted by ‘la Caixa’ Banking Foundation. According to an ActionAid report, 8,000 children die every day in the world due to preventable and easily treatable diseases. In Africa, infant mortality is 14 times higher than in developed countries. Thanks to the collaboration between the Obra Social “la Caixa” and Gavi, the international organization created to improve access to vaccines for children living in the poorest countries of the world, it has been possible to vaccinate millions of children over the past decade, reducing the number of premature deaths. La Caixa and its Obra Social promote the Business Alliance for Child Vaccine, inviting Spanish companies to be part of this solidarity initiative and guaranteeing that their contributions will be used entirely for the vaccination of children, according to its promoters. Although, compared to the past, we have achieved surprising results that have increased the quality and life expectancy of millions of children, we will continue to make every effort in the development of the Alliance project, in the hope of achieving the goal that Gavi has set: reaching 300 million children between 2016 and 2020 to prevent 5 to 6 million deaths in the long term. Any company can help save the lives of more children. Our wish is that many more will join this important initiative.
https://medium.com/bitnovo/bitnovo-joins-the-business-alliance-for-the-child-vaccine-828dc2978755
['Roberta Quintiliano']
2018-05-17 08:25:00.338000+00:00
['Health', 'Business Partnerships', 'International Development', 'Partners In Health', 'Blockchain Technology']
466
How to Become Fluent in Multiple Programming Languages
Learn the statically-typed and syntactically-specific languages first. Nearly every article titled “Which Programming Language Should I Learn First?” suggests that Python is the perfect first language for someone to learn. While I agree that Python is a good first language due to its simple syntax and flexibility, I believe that several programming fundamentals that will be necessary later on won’t be learned. Therefore, I would offer some counter-intuitive advice: learn the statically-typed and syntactically-specific languages first. Instead of learning the language that will hold your hand and offer you a comfortable path to wade into the world of programming, learn the language that will yell at you if you forget a semicolon or will refuse to work because your data types are wrong. Statically-typed languages. Statically-typed languages (such as C# and Java) require a data type to be assigned to their variables. Data types include strings, integers, and Booleans, to name a few. Variables are bound to the data types they are assigned when they are initialized and can’t be changed; otherwise, errors will be thrown before the code runs. Dynamically-typed languages, on the other hand (such as Python), don’t require data types to be assigned at the initialization of the variable. Variable types are checked for correctness at runtime (once the program has been compiled and run). By learning how to work with statically-typed variables, you get a foundational understanding of data types that you can build upon in the future. For example, JavaScript is a happy little programming language that has no problems adding together integers and strings. However, not being aware of data types can cause problems later on when bugs arise. By being strict with your learning and gaining a firm understanding of data types, you can save yourself a lot of headaches further on down the road when learning a dynamically-typed language with variables that can change data types on the fly. Additionally, learning a statically-typed language first will add a layer of logic to your coding process. Syntactically-specific languages. When learning a new natural language, you first learn your alphabet, and then you go on to learn the words, the sentence structure, the punctuation, and the grammar of that language. Learning a programming language is similar, in that each language has a specific syntax you must use for the code to work. In other words, each programming language has a way in which it likes its sentences to be structured and its paragraphs to be formed. Some languages (such as Java and C#) are very specific about their syntax and will refuse to work if you forget a single semicolon (semicolons are often put at the end of a line of code, similar to how a period is placed at the end of a sentence to signify its end). Other languages, such as Python, have far less rigid syntax requirements. For instance, Python uses essentially no semicolons and, compared to C#, very few curly braces to organize its code. Learning a syntactically-specific programming language first will give you a firm understanding of the proper structure you need to follow when writing code.
By becoming aware of the proper indentation structure, the necessity to end your line of code with a semicolon, and the requirement for your functions to be written within the confines of curly braces, you will be able to write code that is easily understandable, more organized, and easier to maintain and debug. Furthermore, this will give you a template with which to write your code when you begin venturing into new languages that aren’t as strict with syntax. In essence, learn the discipline now, and reap the rewards later.
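To make the dynamic-typing pitfall mentioned above concrete, here is a small JavaScript sketch; the variable names and values are invented for illustration:

```javascript
// A classic dynamic-typing pitfall: JavaScript happily mixes types.
const price = "19";   // arrived as a string, e.g. from a web form field
const shipping = 5;

console.log(price + shipping);          // "195" — string concatenation, not addition
console.log(Number(price) + shipping);  // 24  — convert explicitly first
```

In a statically-typed language, price would have had to be declared with a numeric type in the first place, so the mistake could not compile, which is exactly the kind of early feedback the article recommends learning from.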
https://towardsdatascience.com/how-to-become-fluent-in-multiple-programming-languages-9f473c146b90
['Madison Hunter']
2020-12-28 18:32:49.560000+00:00
['Technology', 'Education', 'Data Science', 'Programming', 'Coding']
467
Spotlight: Christina Morillo, Identity and Security Expert, Founder of #WOCinTechChat
Christina Morillo is an information security and enterprise identity professional, as well as a technical product manager. Her experiences across enterprise security and identity, insider threat, and cloud identity programs and deployments have taken her to companies like Morgan Stanley, Fitch Ratings, AllianceBernstein, Microsoft, and currently Marqeta, where she is a Senior Security Product Manager. In addition to her professional work, Christina co-leads Women in Security and Privacy’s New York Chapter and volunteers with multiple organizations aligned with her mission of getting more women and underestimated folks into tech. In 2015, she also co-founded #WOCinTech Chat, the grassroots initiative best known for boosting visual representation through its open-source collection of stock photos featuring women of color technologists. We spoke with Christina about her career path, her work, and her advice for those currently navigating the path to a career in tech. Hi Christina! Tell us about yourself — how did you get to where you are today? Christina Morillo I was born and raised in New York City to Afro-Dominican parents. I grew up with my mother and her family. My uncle, my mother’s brother, was an electrical engineer and brilliant at math. Thinking back, I remember being super fascinated by all of his STEM books, circuits and gadgets; I was always a curious child, never afraid to tinker. After my high school years, I ventured off to college to study Information Technology & Network Administration. While going for my undergrad, I started building my experience by working help desk and technical support roles. I gradually moved into desktop support and system/network administration. By the time I graduated, I had the necessary experience to land a job as a Jr. network/system administrator and was offered an information security role shortly thereafter. The rest, as they say, is history. I’ve been in the industry for quite a while. You are an information security and enterprise identity professional, and a Senior Product Manager, Security. What first sparked your interest in tech? I’ve always had an insatiable curiosity about how computers, networks and technology in general work past the surface level. When I finally got to build my own PC, I fell in love with all the bits and bytes. My college years really helped me build the structure and foundational knowledge I needed to drive my career forward. It also brought community. Those were the early days of AOL and 14K/56K dial-up modems, so definitely not as advanced as today, but it’s where I learned how to navigate and appreciate the power of the internet. In addition to your work, you also co-lead the Women in Security and Privacy’s New York City chapter and you regularly volunteer with multiple organizations dedicated to supporting more women and people from underrepresented groups in the tech industry. How does community help foster a more inclusive tech industry? None of us were meant to go through the ups and downs of career and life all alone. While it is possible, it is extremely difficult. Finding a supportive community can really help you get through challenging times throughout your personal and professional life. In 2015, you co-founded #WOCinTech Chat, a grassroots initiative best known for boosting visual representation through an open-source collection of stock photos featuring women of color technologists. What inspired this project and how has the project grown over the last 5 years?
When we were creating our website, I noticed that there were few stock images of women of color in professional and technical settings. At the time, the stereotypical photos available on Getty Images or Google were simply terrible. So, I decided that we should hold a shoot, made up of women of color who actually work in the tech industry, with the idea that we could open-source the collection via Creative Commons, and people all over the world could then use these images in all of their materials, websites, etc., with the goal of increasing representation in tech. We didn’t want people to experience the same frustration we faced. The stock photos put a face to something many women of color in tech have been saying for years: “We are already here.” We were done with the excuses: it’s a pipeline issue and there aren’t enough qualified candidates. We didn’t want today’s problems to be put off to the indeterminate future, when the young girls of color in today’s pipeline finally enter the workforce. Because if they were to enter the pipeline in the first place, they deserved to see people who looked like them today. While the photos live on via Creative Commons with attribution, the initiative does not. We realized that diversity work is a full-time job and decided to pivot by focusing on growing our respective careers and our families, and scaling mentorship in other ways. What are your thoughts on why diverse visual representation in tech is important to help change the status quo in the industry? If we don’t see proper representation of ourselves in and across tech, media and other aspects of society, how will the younger generation know that it is possible? What will give folks who look like us hope? While I am thankful that my journey was what it was, I look back and wish that I had seen the potential paths earlier. Seeing folks who look like you in places where you aspire to be gives you the drive and motivation you need to strategically navigate these challenging spaces. What are some of the greatest opportunities you see right now to drive meaningful change when it comes to more equitable representation in the tech and startup worlds? The opportunities have always been present but have been ignored. While visual representation is important, and may help to combat biases, founders, executives and VCs really need to get out of their own bubbles and pay attention. The reality is that diverse teams are great for business, yet not enough effort is put into funding and investing in Black and Brown founders and talent. Inclusion is less about stats and more about listening to, investing in, promoting and retaining talent that pushes against the status quo. What advice would you give to those who are currently navigating the path to a career in tech? First, don’t compare your beginnings with anyone else’s middle. The person you think was an overnight success has really been years in the making. Instead, focus on and appreciate your path. There is no cookie-cutter template, because your experience will be custom to you; however, there is no quick route and you will need to put in the work. If you’re in college studying C.S., take advantage of the diverse set of programs offered at the school: e.g., mentorship, sponsorship, internships, apprenticeships, even internal/external work-study programs. Get as hands-on as you can; experience is the only real teacher. Hone your communication skills (verbal and written) as well. A big part of working in tech revolves around communicating with people.
Lastly, find your tribe, your community, the folks who will inspire you and help you navigate the muddy waters while pulling you up and out of them. Learning how to cultivate strong relationships will be your key to unlocking many doors.
https://blog.thenounproject.com/spotlight-christina-morillo-identity-and-security-expert-founder-of-wocintechchat-d09a7e9e4e9a
['Lindsay Stuart']
2020-12-03 00:22:10.714000+00:00
['Spotlight', 'Leadership', 'Women In Tech', 'Diversity In Tech', 'Technology']
468
JavaScript Best Practices for Writing More Robust Code — More Ways to Find Items
Photo by Rebekah Howell on Unsplash JavaScript is an easy-to-learn programming language. It’s easy to write programs that run and do something. However, it’s hard to account for all the use cases and write robust JavaScript code. In this article, we’ll look at ways to find items in JavaScript arrays. Finding If Something Exists in an Array Instance With the some Method The JavaScript some method lets us check whether one or more items matching a given condition exist in the array instance. It takes a callback that has the array entry as a parameter and returns the condition we’re checking for. For instance, if we want to check if one or more entries in the array instance are even, we use some to do that as follows: const arr = [1, 2, 3]; const result = arr.some(a => a % 2 === 0); In the code above, we have the arr array, which has one even number, 2. Then we called some on it with the callback a => a % 2 === 0 to check if each entry is even, where a is the array entry that’s being checked. In this case, if some finds an array entry that’s even — which it does — then it returns true. Otherwise, it returns false. Therefore, some should return true. The callback can also take the index of the array entry that’s being checked as the 2nd parameter, and the array it’s being run on as the 3rd parameter. They’re both optional. Therefore, we can rewrite our example as follows: const arr = [1, 2, 3]; const result = arr.some((a, index, array) => array[index] % 2 === 0); And we get the same result as before. some can optionally take a 2nd argument, which is the value of this that we want to reference in our callback function. We can pass in our own value of this as follows: const arr = [1, 2, 3]; const result = arr.some(function(a) { return a > this.min; }, { min: 1 }); In the code above, we pass in a 2nd argument, which is the object { min: 1 }. That will then be the value of this in our callback, so we can use its min property to check if there are some entries of the array that are bigger than 1. Since we also use this in the callback, we have to use a traditional function to reference the value of this. Then result should be true, since there are entries in arr that are bigger than 1. This is more robust than looping through the array ourselves and checking each entry with a loop, as some is well tested and used frequently, so the possibility of any bugs with this method is almost zero. Photo by Ramesh Casper on Unsplash find The array instance’s find method lets us find the first entry in the array that meets the given condition. It always searches from the start to the end of the array. find takes a callback with the array entry as the first parameter, and the callback returns the condition of the array entry that we’re looking for in the array instance. For instance, we can use it as follows: const arr = [1, 2, 3]; const result = arr.find(a => a > 1); In the code above, we have the find method, which takes a callback a => a > 1 to find the first item in arr that’s bigger than 1. a is the array entry that’s being looped through. Then we should get 2 returned, since that’s the first entry in arr from the start that’s bigger than 1. The callback can also take the index of the array instance that’s being looped through as the 2nd parameter and the array instance as the 3rd parameter. However, they’re both optional.
We can use them as follows: const arr = [1, 2, 3]; const result = arr.find((a, index, array) => array[index] > 1); In the code above, we retrieved the entry being checked by using the index to access the array entry instead of getting it from the parameter itself, so we should get the same result as before. Like some , the 2nd argument of the find method is the value of this that we want to reference in our callback. For instance, we can pass in an object and set it as the value of this in the callback as follows: const arr = [1, 2, 3]; const result = arr.find(function(a) { return a > this.min }, { min: 1 }); In the code above, we passed in { min: 1 } as the 2nd argument of find . Then we accessed the min property of the object in the callback by referencing this.min . Therefore, we should see 2 as the value of result , since this.min is 1 and we’re checking whether a is bigger than 1. Again, find is an array instance method, so it’s less likely to have bugs than code we write ourselves to find an item in an array. Conclusion The array instance’s some method lets us check if one or more entries of an array meet a given condition, while find returns the first entry that does.
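To tie the pieces together, here is a minimal, runnable consolidation of the snippets above (variable names are illustrative):

```javascript
const arr = [1, 2, 3];

// some: true if at least one entry matches the condition
const hasEven = arr.some(a => a % 2 === 0);
console.log(hasEven); // true

// Passing a `this` value via the 2nd argument requires a traditional function
const aboveMin = arr.some(function (a) {
  return a > this.min;
}, { min: 1 });
console.log(aboveMin); // true

// find: the first matching entry, or undefined if none matches
const firstBig = arr.find(a => a > 1);
console.log(firstBig); // 2
console.log(arr.find(a => a > 10)); // undefined
```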
https://medium.com/swlh/javascript-best-practices-for-writing-more-robust-code-more-ways-to-find-items-f0fadcf4ed7e
['John Au-Yeung']
2020-05-06 15:13:55.165000+00:00
['JavaScript', 'Software Development', 'Programming', 'Technology', 'Web Development']
469
The role of Smart Meters in WePower’s quest to make the world greener
From the very beginning, WePower’s vision has been focused on building a green energy trading solution that would accelerate the world’s transition to sustainable energy sources through the power of digital technologies and communities across the globe. WePower is starting with a set of functionalities that will allow anyone to invest in new green energy developments. However, there is a much broader scope of services that WePower’s technology will enable as it scales. My name is Heikki Kolk. I am the Chief Engineer at Catapult Labs as well as WePower’s advisor on Smart Grid and Energy IOT topics. In this article I would like to share with you a short overview of what smart metering systems are and how they will help WePower bring energy services to the digital world. What are smart meters and why do we need them? Smart electricity meters are electronic devices that record consumption of electric energy in intervals of an hour or less, and communicate that information, at least daily, back to your utility. This relatively simple technology shifts the way your energy consumption is measured quite drastically. Instead of you reporting your energy usage every month to your utility company, and then getting billed based on an estimated energy cost, with smart meters your utility can calculate your real energy use on a daily basis and bill you based on the energy price for every given day or even hour. There are a few other important benefits that a functioning smart metering system can provide: smart meters can help you connect the energy you purchase to its source, which creates an opportunity for a set of new services that will allow you to purchase energy from the source of your preference; smart meters provide a better understanding of how each of us uses energy and help us find ways to optimize our usage habits, which can save money on your energy bills; with smart meters your utilities can better understand how energy is produced and consumed at the city, state, or country level. This helps them manage the system better and thus lower energy costs, as well as identify system malfunctions much faster and fix them before the system fails. WePower IOT expert Heikki Kolk explains the role of Smart Meters in the WePower vision. Smart meters are currently more commonly found in industrial facilities, such as plants, data centers, warehouses and factories. However, many countries across the globe are working on a nationwide smart meter system rollout to cover the entire country. For example, the European Commission has set a target for member states’ utilities to reach 80% smart meter deployment for all customers by 2020 and to develop a business case to support the further adoption of smart technologies. Why are smart meters important to WePower? Basic smart meter functionality at the energy production sites will primarily be needed for transparent energy production accounting and smart contracts. Smart meters will provide WePower with the ability to transparently record the fact that you purchased green energy at a given time for a specific value, and to create functional power purchase agreements in the form of smart contracts, which you will be able to hold for a future date or sell back into the marketplace (a tiny illustrative sketch of this matching idea follows below). This is something that does not exist today in energy. However, with the growth of country-wide smart meter adoption, WePower will start introducing a full range of virtual utility services.
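To make the accounting idea concrete, here is a tiny illustrative sketch of matching a consumer’s hourly smart-meter readings to green production records. The field names and the matching rule are assumptions for illustration only, not WePower’s actual smart contract logic:

```javascript
// Illustrative only: field names and the matching rule are assumptions,
// not WePower's actual smart contract logic.
const production = [
  { hour: '2017-12-01T10:00Z', source: 'solar-plant-1', kWh: 120 },
  { hour: '2017-12-01T11:00Z', source: 'solar-plant-1', kWh: 90 },
];
const consumption = [
  { hour: '2017-12-01T10:00Z', kWh: 3 },
  { hour: '2017-12-01T11:00Z', kWh: 4 },
];

// Record, hour by hour, which green source covered the consumer's usage.
const receipts = consumption.map(c => {
  const match = production.find(p => p.hour === c.hour);
  return { hour: c.hour, kWh: c.kWh, coveredBy: match ? match.source : 'grid mix' };
});
console.log(receipts);
```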
Data from the smart meters will allow WePower to build data management and analytics tools that will help your utility providers manage the energy infrastructure much more efficiently. WePower will also be very well positioned to build a developer toolkit so that new services can be built on top of it with the same ease as mobile apps are today. What WePower proposes for the green energy marketplace is a powerful new tool that can give a whole new dimension to how people and organizations interact, invest, and trade with green power. The transition to green energy is needed more than ever. I am certain that with WePower we will get to the green world much faster.
https://medium.com/wepower/the-role-of-smart-meters-in-wepowers-quest-to-make-the-world-greener-e386da0caf2a
['Heikki Kolk']
2017-12-03 18:44:50.005000+00:00
['Smart Meter', 'Technology', 'Energy', 'Blockchain', 'Electricity']
470
Can Artificial Intelligence Make Us Safer?
Artificial Intelligence increases workplace efficiency, relieves us from the most mundane of tasks, and helps us achieve the best business results. But have you ever wondered if AI could make us safer? If not, now’s the time — let’s explore what AI can do for our safety. Technology fails… but no more often than people There’s one thing we need to be clear on from the outset. 100% fail-safes, in any form, do not exist. People fail. And yes, technology can fail as well. History offers examples of nearly every kind of technology failing at some point. Take Tesla’s Autopilot: police confirmed that 50-year-old Jeremy Beren Banner died while using the Autopilot system. Worse, the National Transportation Safety Board verified that the driver had turned on Autopilot just 10 seconds before the accident happened, while the car had failed to detect the driver’s hands on the steering wheel. On Tesla’s part, the company regularly reminds drivers that they must supervise Autopilot at all times. Moreover, Tesla claims that most of the serious accidents involving its Autopilot system are actually the result of ‘driver distraction.’ Still, it seems technology is failing us — again. According to the World Health Organisation, more than 1.25m people die each year as a result of road traffic accidents, while an additional 20–50 million are injured or disabled… whereas the total number of accidents involving cars that use AI technology is much lower than the number caused by humans. Meaning? Meaning we need to reconsider whether machines that use AI really do pose a greater threat to our safety than we pose to ourselves. Google thinks not. Its autonomous cars claim to be “10x safer than the best drivers” and “40x safer than teenagers” — so the question is, ‘Why aren’t we prepared to trust technology?’ As with any new tech, there will always be the fear factor. Research shows that Artificial Intelligence is better than people at tackling various complex tasks. As such, we can harness AI to make us safer by taking the risk of human error out of the equation — and use it for the following tasks: 1. Enhancing cybersecurity and data protection Cybersecurity is one of the greatest challenges of our time. New malware appears every few seconds, and anyone responsible for IT security knows that information security must now be assessed through a ‘prism of risk.’ Some believe it has become impossible to protect our devices, systems, and data against every possible threat. Even Brian Dye, vice president of Symantec — a leading antivirus software developer — recently said, “It’s time to retire.” Retire because modern malware is smart. It can learn as it goes, and use AI to coordinate cyberattacks. But how can companies respond? There’s only one way. They must fight AI with AI. It’s no surprise that many businesses’ first foray into the world of artificial intelligence is cybersecurity-related — as today, mechanisms that use AI (and tools that automate the prevention of cyberattacks) are the most effective. These are the types of AI mechanisms that make us safer: ones that can identify patterns and anomalies at scale, alerting us to cyber-attacks, then blocking the threat before it gets the chance to take hold (a toy sketch of this idea appears at the end of this article). 2. Reducing human error The human element is a crucial consideration in workplace safety. We can’t hide from the fact that fatigue and stress contribute to many accidents. Therefore, one of the key advantages of AI is its ability to stay alert, focused, and relaxed at all times.
Elsewhere, complex AI algorithms can help finance and accounting professionals carry out their day jobs, reducing manual work so they can focus on more important tasks — freeing headspace to not only minimize error rates but to improve efficiency as well. AI could have an equally positive impact on employee job satisfaction: imagine the effect of automating routine tasks, giving your team more time to work on exciting projects that inspire and motivate, in turn helping a business grow and succeed. [ See also: How to Increase Accounting Efficiency Using AI Invoice Automation ] 3. Improving road safety As we mentioned at the start, vehicles that use AI can improve road safety. They can mitigate human error behind the wheel; remind human operators to drive more safely; suggest we take every precaution where necessary. Moreover, AI can help identify, and as a result reduce, dangerous situations, even teaching us to drive with more care. Take modern cars that use AI-driven solutions to: Remind drivers to follow specific traffic rules Maintain a safe distance from other vehicles Drive in the correct lane Yet examples of using Artificial Intelligence to improve road safety don’t end there. One solution from Washington has been implemented in 20 cities across 8 countries: as it turns out, it has improved the safety of hundreds of once-dangerous intersections. 4. Supporting health protection and disease detection AI is already used in complex tasks like drug discovery: it makes the whole process more efficient. It also supports breakthroughs in medicine by helping doctors more precisely predict which methods and prevention strategies will work in which patient groups. It can even analyze outcomes based on individual genetic variability, the local climate, and a patient’s lifestyle. Now, AI is improving the productivity of radiologists — let’s look at an example. Researchers from Stanford developed an algorithm to assess chest X-rays for signs of disease. It can recognize up to 14 types of medical conditions, with one particularly surprising result: it’s better at diagnosing pneumonia than several expert radiologists working together. Another system uses AI to analyze patient data, which allows for better management of patient visits. The system reviews both structured and unstructured data from medical records and doctors’ notes, creating an accurate and comprehensive health portrait of the patient, thus allowing for more effective patient service and faster diagnosis. All told, AI is a handy medical tool — but we must remember: it can’t solve medical problems without human intervention. Medical care comes with tremendous responsibility. [ See also: How Technology Can Improve the Medicine: Machine Learning Methods Used to Detect Cervical Cancer ] Expert verification will always be a necessity. 5. Detecting crime Many technologies can help police detect and fight crime. AI sits chiefly among them because it’s able to forecast when and where a crime will be committed before the offense ever happens. But how’s that even possible? Thanks to PredPol: a company that uses Machine Learning to predict the time and place of a potential offense. PredPol’s software analyzes existing data on past crimes and predicts when and where the next crime is most likely to happen — and its system is successfully used in several cities across the US. 6. Tightening border security As a final application, AI can help tighten border controls and make gateways into your country more secure.
Let’s look at an example: a project called iBorderCtrl, tested by the Hungarian National Police across Hungary, Latvia, and Greece. From September 2016 to August 2019, travelers were given an automated lie-detection test in which a virtual border control officer asked questions like: “What’s in your suitcase?” “If you open the suitcase and show me what is inside, will it confirm that your answers were true?” The system used AI to record travelers’ faces and analyze 38 micro-gestures, scoring each answer. If travelers passed, they got a QR code that allowed them to cross the border; if they failed, the virtual officer asked more questions or handed them over to a human agent. AI is as close to a fail-safe as we can get… but. AI could soon impact every sector of our lives. It will improve our safety, but we have to remember: we cannot rely on an algorithm to protect us in every instance. In some cases, we will need to leave the final decision to a human. Moreover, as we come to use AI more, we have to be careful to mitigate the risk of mistakes — and to avoid unintended, some might say apocalyptic, consequences.
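Looking back at the cybersecurity section, here is a toy sketch of the “flag anomalies at scale” idea. It is illustrative only; real security tooling is far more sophisticated than a z-score check:

```javascript
// Toy anomaly detection, not production-grade security tooling:
// flag values whose z-score exceeds a threshold.
function anomalies(values, threshold) {
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  const variance = values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
  const std = Math.sqrt(variance) || 1;
  return values.filter(v => Math.abs(v - mean) / std > threshold);
}

// e.g. login attempts per minute; the spike stands out
console.log(anomalies([5, 6, 5, 7, 6, 5, 250], 2)); // [250]
```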
https://medium.com/dlabs-ai/can-artificial-intelligence-make-us-safer-fc2bbec19665
['Przemek', 'Shemmy']
2020-02-10 16:53:52.425000+00:00
['AI', 'Safety', 'Artificial Intelligence', 'Philosophy', 'Technology']
471
Announcing the 2019 NYC BigApps Blockchain Cohort
Get tickets for NYC BigApps Blockchain Demo here! Blockchain is teeming with use cases for governments all across the world. We’ve seen successful pilots implemented in places from Delaware to Ethiopia. New York City, however, is a whole different ballgame. With a population of over 8.6 million, the city has kept legacy systems in place to maintain the reliability and safety of information without interrupting public services. The bigger a company or organization gets, the harder it is to innovate and test emerging technology — large governments are equally affected by this. This year, NYCEDC and SecondMuse teamed up to explore potential use cases for blockchain in New York City. We first held need-finding workshops with government stakeholders from NYC Economic Development Corporation, NYC Department of Buildings, NYC Department of Information Technology and Telecommunications, and more to determine their most pressing challenges and establish where blockchain may help to provide meaningful solutions. From this, three Challenge Areas were selected: Identity, Energy, and Real Estate Asset Management. Then, we asked the blockchain community: how would you use blockchain to address these Challenge Areas and improve the efficiency, security, and transparency of public sector services in NYC? This exploration is laying the early groundwork to introduce new technology to city government. The competition is nearing its end, with 10 finalist teams preparing to present their answers to that initial question. These teams have identified and developed practical use cases for blockchain in civic applications and will present their solutions at our upcoming Demo Day. Please join us for an all-day event on October 3rd, 2019, where you will hear all 10 finalist companies pitch for mentorship, services, cash prizes, and more! Tickets are available here.
https://medium.com/@nycbigapps/announcing-the-2019-nyc-bigapps-blockchain-cohort-1b465268129f
['Nyc Bigapps']
2019-10-03 05:30:41.365000+00:00
['Blockchain', 'Government', 'Emerging Technology', 'Civictech']
472
Why Domino’s Pizza Was Ready to Give Free Pizza for 100 Years
The Campaign Domino’s Pizza launched a marketing campaign in Russia where people could get 100 free pizzas for 100 years if they tattooed themselves with the brand’s logo and posted those pictures on social media with the hashtag #DominosForever. To the company’s surprise, there was quickly an abundance of photos of people with the brand’s tattoo on Facebook, Instagram, and VKontakte (a Russian social media platform). I wonder what they expected. They should have expected people to react this way. Here are some of those pictures posted by people: Source: Screenshot by the author on Instagram Source: Screenshot by the author on Instagram Domino’s Pizza clarified that only the first 350 people to post the tattooed images would get the free pizzas. The tattoos were also required to be at least 2 cm in length and had to be on “visible parts of the body.” But it was too late already, and people didn’t stop posting such pictures. They kept coming non-stop. Even after the number of participants was well over 350, people still kept posting pictures on social media. The company then had to post an urgent message for people who were about to get the tattoos: "An urgent message to all those sitting at the tattoo artist’s right now: We’ll include you in the list of participants, but we’re waiting for photos up to midday today." The campaign had been planned to run for two months, but due to the overwhelming participation, it had to be ended in just 5 days.
https://medium.com/illumination-curated/why-dominos-pizza-was-ready-to-give-free-pizza-for-100-years-973fcc05815d
['Binit Acharya']
2020-10-09 13:14:22.681000+00:00
['Business', 'Marketing', 'Psychology', 'Technology', 'Social Media']
473
From Prototype to Product: Software Engineering in Tableau’s Early Days
At the premier visualization research conference this year, two of Tableau’s founders received a special recognition of their foundational research that “has stood the test of time” and was also the basis for Tableau. This is the story of how we turned the original research into a product and a successful company. The IEEE VIS conference is the largest forum for academic and applied research in visualization. It grants Test of Time Awards each year that honor work that has endured and is still relevant 20 years after it was published. This year, Tableau co-founders Chris Stolte and Pat Hanrahan, with their former colleague Diane Tang, received one of them for the research underlying Tableau — a paper titled Polaris: a system for query, analysis and visualization of multi-dimensional relational databases. The Polaris user interface with explanations from the paper. The paper laid out several key ideas: interactive specification of the visualization using a drag-and-drop user interface, the VizQL query language that described both the visualization and the data query, and live retrieval of the relevant data directly from a database (rather than the common approach of loading data files into memory). In 2003, Chris Stolte, Christian Chabot, and Pat Hanrahan founded Tableau on this work, to develop Polaris from an academic prototype into the company’s first product — Tableau Desktop. Academic prototypes are mostly meant to demonstrate an idea, not to withstand daily use by many different users with different needs, data, and environments. Turning the prototype into a product that could be shipped was not a trivial undertaking — technical and product challenges stood between these inventors and building a successful company. Dr. Chris Stolte accepting the VIS Test of Time award on behalf of his co-authors, Dr. Pat Hanrahan and Dr. Diane Tang I was Tableau’s seventh employee, jumping back into a developer role after leading engineering teams at another California-based startup (which is a story for another time). Today, I’m Tableau’s CTO, focused on looking at future technologies and product ideas that we could leverage to push our customers’ ability to work with their data to new heights. Product #1 In the early days, Tableau was a total unknown — nobody had heard of us, so why would they buy our product? One challenge from those early days was to remove barriers to people trying out what we thought was an amazing product from a new company. We had to meet our customers where they were, from a technology and deployment perspective, to make it easy for a data worker to adopt a new product. It was 2004 — the cloud was non-existent, data lived inside corporate data centers, and Apple had no presence in the enterprise. The hot technology was .NET, but even that required a bunch of upgrades to Windows and installed frameworks, and most businesses at the time just weren’t there. We chose to build the first version of Tableau Desktop using MFC — Microsoft Foundation Classes, a staid framework that could easily be deployed on any Windows platform running at the time, and for which developers were relatively plentiful. We also wanted to use the data technology that our customers used. Yes, this meant supporting the popular relational databases of the time, like Microsoft SQL Server, Oracle, Postgres, and MySQL. But it also meant adapting the ideas in the original paper to support hierarchical databases (cubes), which spoke an entirely different query language called MDX.
The relational and hierarchical worlds shared some ideas, but resulted in database platforms with very different sets of capabilities. That meant leveraging VizQL to abstract away as many of the details of the underlying data platform as we could (building the beginnings of a great data layer in the process), while letting some of the important differences shine through. But this abstraction wasn’t enough — when working with Oracle to build a connector to their database, they warned us about generating “SQL by the yard” and the performance problems it could cause. So we had to go from abstract usage of a variety of different data platforms to specific, tuned SQL or MDX that would run well against the target platform. That meant learning those systems well, and encoding that knowledge into query generation systems. We had to know the best ways of getting metadata from the database that would help us both build a great experience and generate the best queries. Understanding the collation of the data allowed us to do important optimizations like translating enumerated filters to ranges, which streamlined our queries and made them more efficient for the database to execute (a small illustrative sketch of this idea appears at the end of this piece). Formalisms and Visualization The formalism of VizQL and the data abstractions that we built gave us a nice way of ensuring that our rapidly evolving system kept running well — they provided an ideal interface around which we could build automated tests for anything that the entire system could produce, from visuals to abstract usage of the data platform to concrete queries that we would run. Because our VizQL pipeline built not only visualizations but also the queries to go with them, we could automatically generate tests for any new database platform that we added. Of course, it turns out that not all databases will compute things exactly the same way on floating-point columns, so we had to accept that each database platform might have its own “right” answer. Testing the visualizations was a bit more challenging, as many of the graphics systems of the time weren’t guaranteed to generate the same output on every machine, which later led to lots of approximate comparisons and readable text-based representations of the visualization in the test pipeline. It wasn’t a complete solution to testing what would become a complex system, but it gave us confidence in the evolution of the system in those early days. Tableau would become known for its visualizations — they were dynamic, interactive, and beautiful. And we wanted them to be rich — it wasn’t good enough to show just a few bars or a dozen lines, or even a thousand points in a scatterplot. We wanted our system to let people throw lots of data onto the screen and be able to look at it and interact with it. The machines of 2004 that existed in businesses were not exactly ready for such visualizations, and technologies like OpenGL and DirectX weren’t yet a fit for us (though we would come back to these later). We spent a lot of time squeezing performance out of our MFC-based application to get a good balance between the quality and performance of our visualizations, letting people build visualizations with tens or hundreds of thousands of graphical marks.
This included everything from trying to smartly pre-allocate the Windows resources we would use (because creating them on-demand was too slow), to creating “incremental” rendering systems that would both spread rendering of complex visualizations out in time, and render them in phases of increasing quality, so a quick version could be displayed to support high interactivity, and a high-quality version delivered the beautiful output we desired. Though we have updated the look of the original prototype (left), we have continued to emphasize interactivity and the flow of analysis in the shipping version (right) Design as a first-class concern From the early days, we knew we wanted anyone to be able to learn these tools and work effectively with their data. That meant that we couldn’t rely on people having specialized knowledge — couldn’t rely on them writing SQL, designing good visualizations or color mappings, understanding geographic mappings, etc. Instead, we had to build in that knowledge. But we also had to appeal to the experts — anyone who showed up with these skills had to be able to put them to use. That meant walking a fine line between building a narrower, guided experience for anyone and a more open experience for people with more skills — and doing it in such a way that both audiences feel like the product was built for them. To achieve this, we focused on developing and applying a set of design principles that would guide us in creating amazing experiences around working with data, and we tested the design of new capabilities against these principles. These included incremental construction and immediate feedback, direct interaction, and having smart defaults. These led us to emphasize features that allowed someone to do something complex by building up simple, direct actions with good feedback at every step. This would let both experts and non-experts understand the impact of their actions. It allowed us to eschew the paired design/view approaches common in other applications of the time. We also had to ensure that “every intermediate state was meaningful when dragging around fields,” recalls Scott Sherman, another key engineer from those days, when thinking about interacting with a single-mode application. Flow We believed that people should get into a flow state when working with their data — that state that artists or musicians or great engineers describe when they are practicing their craft, when the tools that they use become extensions of themselves and essentially disappear. For a data worker, it would be that state of just being able to think about the data and the questions you want to answer or the goal you want to achieve. Flipping between design and view modes, common in the BI and reporting tools of the time, would directly interfere with achieving that. So, we strove to make everything in the application as live and direct as possible. These principles also pushed us to uncover ways to fit more analytical capabilities into this flow state — we wanted these to be interactive and direct, rather than a matter of configuration. These included the automatic construction of visualizations from data (Show Me, in 2005), the single-click coordination of multiple views (dashboards, in 2007), the automatic geocoding of place names for display on a map (in 2008), and many more over the years. These experiences would end up changing the users of our products from customers to fans. And that brings us to now — nearly 18 years later.
The original VizQL work is still the heart of our product, and the work we have done since then on building data platforms and design principles is with us every day as we continue to make great products that help people see and understand their data. We owe a great deal of thanks to founders Chris and Pat and their collaborator Diane for their groundbreaking work, which turned into both an amazing set of products and an amazing company.
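As promised above, here is a small illustrative sketch of the enumerated-filter-to-range idea. This is not Tableau’s actual query generator; the function name, inputs, and the contiguity rule are assumptions made purely for illustration:

```javascript
// Illustrative sketch only, not Tableau's actual query generation:
// collapse an enumerated filter into a range predicate when the selected
// (numeric) values are contiguous in the column's sorted domain.
function filterToSql(column, selected, sortedDomain) {
  const sorted = [...selected].sort((a, b) => a - b);
  const lo = sortedDomain.indexOf(sorted[0]);
  const contiguous = lo >= 0 && sorted.every((v, i) => sortedDomain[lo + i] === v);
  if (contiguous) {
    // One BETWEEN instead of a long IN list keeps generated SQL short.
    return `${column} BETWEEN ${sorted[0]} AND ${sorted[sorted.length - 1]}`;
  }
  return `${column} IN (${sorted.join(', ')})`;
}

console.log(filterToSql('year', [2002, 2001, 2003], [2000, 2001, 2002, 2003, 2004]));
// year BETWEEN 2001 AND 2003
console.log(filterToSql('year', [2000, 2002], [2000, 2001, 2002, 2003, 2004]));
// year IN (2000, 2002)
```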
https://engineering.tableau.com/from-prototype-to-product-software-engineering-in-tableaus-early-days-7599a87befed
['Andrew Beers']
2020-11-17 18:59:09.132000+00:00
['Tableau', 'Visualization', 'Research', 'Software Development', 'Technology Transfer']
474
How to Monitor Your Boat During the Winter Months
It’s summertime, and for some of us that means sunshine, the outdoors, and water. If you are lucky enough to live near an ocean or lake and you are a boat owner, that can also mean that it’s boat time. During the summer months it is much easier to monitor your boat because you are always on it. But what happens when summer ends and you need a way to make sure things are sitting smoothly while you are away? Some boat monitoring systems can run you thousands of dollars and be a pain to install. What if there were a way to spend less and still be able to monitor your boat in real time and get alerts when things go awry? This is a DIY solution using a Raspberry Pi, some electronics, and Initial State to monitor your boat, so you can have all the data when you are on it and when it’s in the slip. You can monitor temperature, battery voltages, shore power, and bilge pump cycles. This remote monitoring solution allows you to monitor your boat from anywhere in the world. It will give you peace of mind knowing your boat is safe and secure. There are a lot of reasons why you would want to know what is going on with your boat in the slip. For example, let’s examine bilge pump cycles. As the name suggests, bilge pumps pump water. They push water out of the bilge (the lowest point on the inside of the boat) so that your boat doesn’t sink. Water can get into your bilge in a multitude of ways: spray while driving the boat, rain and storms, or a missing plug. Today most bilge pumps are automatic, but what happens if one fails? Being able to monitor how often your bilge pump is cycling can give you assurance that yours is still in working condition and is pushing the bilge water out (a small sketch of this cycle-counting logic appears at the end of this article). Let’s next look at shore power. Shore power is what you use for power when you have your boat docked and not running. It not only charges your battery but also lets you run AC-powered devices on your boat. Constant shore power during the winter months is necessary to keep your batteries charged and your Pi running. And if the power is out, who knows what else might be wrong. You can’t monitor your boat if your Pi is dead. Lastly, battery voltages and temperature are a no-brainer. Is the temperature too hot or too cold? Is that going to affect your boat and the devices on it? Are your batteries dead and in need of replacement? This kind of general information is important for a clear picture of the health of your boat. This project was implemented by the very smart and very creative John Poindexter. John was gracious enough to let me share this project with the world, so please enjoy. Basics of What You’ll Need First things first, you’ll need a boat. The boat used in this project was a 1990 Hunter Passage 42 sailboat. While this project is tailored to this boat, you can modify what is documented to fit your boat. Raspberry Pi 3 & SD Card to collect and send data. Initial State to log data, view real-time conditions, and send alerts. RadioShack General Purpose Prototyping Board. KiCad for interface board layout or PCB design. Netgear AirCard 815S mobile hotspot for internet connectivity. A few miscellaneous electronic accessories for the interface between the boat inputs and the Raspberry Pi. *Note: This project is not for the faint of heart. This is one for those who like a challenge and enjoy building their own systems. When it’s all said and done, you’ll feel good knowing you built it yourself. The Details Raspberry Pi I recommend using a Raspberry Pi 3 or Raspberry Pi 4. It can connect to WiFi and send your data to a real-time monitoring solution.
If your Pi isn’t already set up, you can follow the instructions on Raspberry Pi’s blog to get started. Raspberry Pi keyboard & monitor While not absolutely necessary for this project, a Raspberry Pi keyboard and monitor make it a lot easier to work with your Pi and get things done. The Sunfounder monitor and Logitech Wireless Touch Keyboard K400 were both used in the making of this project. For the keyboard to be compatible with the latest version of Raspbian, you’ll need to install Solaar. Solaar is a Python version of the Logitech Unifying software. Run the following in the terminal window of your Raspberry Pi: sudo apt-get install solaar After installation, you can find the Solaar app under the Accessories of the Pi as well as on the top menu bar. In addition to providing a pairing function, it also shows the state of the battery. There are lots of options for Raspberry Pi monitors and keyboards. Any one you choose will be fine and will make working with your Pi easier. Mobile Hotspot A mobile hotspot is used since marina WiFi can be unreliable for continuous connectivity. Really, any mobile hotspot will do; you just need something that allows you to be continuously connected to WiFi. Follow the directions for the device you buy to set it up. You’ll want to connect your Pi to the WiFi (whether marina or hotspot). Your hotspot will have 3 passwords: a WiFi password, a guest password, and an admin login password. You can use the WiFi password to connect your Pi. Your hotspot does have a battery, but to keep it continuously running you’ll want to leave it plugged in. This is also why shore power is so important. Without power, your Pi and hotspot both die, which will require an in-person visit to resolve. Initial State Initial State is data visualization software that can be used for real-time data monitoring and historical data evaluation. While this is the data visualization platform used for this project, you can use any platform that allows you to send in data via an API. Initial State offers a free tier for students with an active edu email address, an individual tier for hobbyists and prototypers for $9.99/month, and an enterprise tier for businesses starting at $20/month. Every account is given a 14-day free trial to test out all features and functions on the platform. Once you register for an account, you can go to your settings and view your access keys. An access key allows you to send data into your account. You’ll use your access key in the Python script later in the article. SSH’ing & IP Address One of the issues that occurred while doing this project was that the Raspberry Pi IP address wasn’t static and changed frequently. This makes it difficult to remotely SSH into your Pi when you don’t know the IP address. There is an easy solution for this. The Pi Process tutorial shows you how to monitor your Pi’s processes and its IP address. You can send the IP address of your Pi to your Initial State dashboard. Knowing your Pi’s IP address makes it easy for you to remotely SSH into your Pi to access it. Initial State Pi IP address dashboard Interface Board You‘ll need to find a copy of your boat’s schematics. You can generally find this in your user manual or online. This will help you to understand the boat’s inputs. The interface board will be a connection between the boat’s inputs and the Raspberry Pi. You can use KiCad to design the layout for the connections.
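As a sketch of the bilge-pump cycle counting described earlier: the project itself drives a Python script on the Pi, but the logic is simple enough to show in this document’s running language (JavaScript). Here, readPumpPin and the alert threshold are hypothetical stand-ins, not part of the original build:

```javascript
// Illustrative pseudologic; the project itself uses a Python script on the Pi.
// readPumpPin() is a hypothetical stand-in for a real GPIO read of the
// bilge pump's power sense line.
const readPumpPin = () => 0; // hypothetical stub: replace with real GPIO input

const MAX_CYCLES_PER_HOUR = 6; // hypothetical alert threshold
let lastState = 0;
let cyclesThisHour = 0;

function onSample(state) {
  if (state === 1 && lastState === 0) {
    cyclesThisHour += 1; // off -> on transition means the pump kicked on
    if (cyclesThisHour > MAX_CYCLES_PER_HOUR) {
      console.log('ALERT: bilge pump is cycling unusually often');
    }
  }
  lastState = state;
}

setInterval(() => { cyclesThisHour = 0; }, 60 * 60 * 1000); // hourly reset
setInterval(() => onSample(readPumpPin()), 1000);           // sample each second
```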
https://medium.com/initial-state/how-to-monitor-your-boat-during-the-winter-months-d89f7d76a88d
['Elizabeth Adams']
2020-09-01 17:02:06.865000+00:00
['Technology', 'Programming', 'IoT', 'Makers', 'DIY']
475
Forbes 30 under 30: Oana Manolache — Founder & CEO of Introvoke
The Entrepreneurial Genes. She established a live streaming platform that she is expanding globally, and she then wants to get more involved in social impact initiatives. She left Romania after high school to earn a Business & Marketing bachelor’s degree at Coventry University in England, graduating with First Class Honours, and now she lives in Boston. Oana Manolache (29 years old) believes she inherited entrepreneurial genes from her parents. “I think I was born with entrepreneurial genes, my parents also being entrepreneurs in Romania. I saw how much work is necessary to build something from scratch, but also the satisfaction that comes with the successful impact that your business can bring to society and internally to the employee culture. (…) I always wanted to build a company that brings a positive change to people’s lives, to our customers, but also to our internal teams. We received a message from a user in Africa who thanked us for enabling him to access real-time information from events; he therefore managed to accelerate his success in building a company in the US. Moments like these remind us why we built Introvoke,” says Oana Manolache. The young entrepreneur is the founder and CEO of Introvoke, a global live streaming platform, which has become very attractive especially in the current environment. As a matter of fact, the live streaming industry is estimated to be worth $125 billion by 2025, according to statistics. “I started Introvoke due to a personal frustration that I validated with other entrepreneurs. I was invited to various events in New York and San Francisco and, although the information would have been so beneficial for my business, I could not afford to spend the time to travel. I started contacting these organizations and I learned that they were already looking for live streaming solutions, but no platform was solving their problem. Facebook Live and Instagram Live are mainly social media platforms, not built specifically for live events, and other platforms were way too expensive and difficult to use, requiring live streaming technical experience. Therefore we soft-launched Introvoke in May 2019. At the launch we had over 150 entrepreneurs, investors and experts, including a pitch competition that was live streamed on Introvoke,” says Oana Manolache. Introvoke has 2 main audiences: any organization that is organizing events or sessions, as well as viewers who are looking for live professional content. Over 35 organizations are currently hosting hundreds of live events on Introvoke, with over 10,000 users attending these events. “The traction came organically, but we are estimating high exponential growth the moment our marketing campaign launches this spring. We built everything with deep feedback from our current customers. We launched the subscription model in March, but we decided to offer our plans for free until 1st May in order to help organizations go live quickly with their events and engage with their customers during this challenging time. We do, however, have our hosts committed to the Elite subscriptions for the longer term,” says Oana Manolache. The entrepreneur is planning for Introvoke to become the world’s main digital venue for any live or virtual event. She also mentions an exit strategy in the next 5–7 years, depending on the company’s expansion, taking into consideration examples such as the acquisition of Twitch by Amazon: “We have a few companies in mind that could be the ideal buyers of the Introvoke platform in the future.”
“After the exit I will start another company with a social impact, maybe even a non-profit. At a global level, 13% of people are undernourished, and among them 3 million children are dying of hunger. It’s a major, difficult problem with no solution so far,” adds Oana Manolache. **Article translated from the original Forbes Romania article.
https://medium.com/introvoke-inc/forbes-30-under-30-oana-manolache-founder-ceo-of-introvoke-4487ae39b819
['Oana Manolache']
2020-05-05 01:00:22.663000+00:00
['Technology', 'Live Streaming', 'Entrepreneurship', 'Forbes 30 Under 30', 'Women In Tech']
476
Best of Modern JavaScript — Generator Methods
Photo by Jason Blackeye on Unsplash Since 2015, JavaScript has improved immensely. It’s much more pleasant to use now than ever. In this article, we’ll look at JavaScript generators. The First next() The first next call starts the generator. That is the only purpose of the first invocation of next , so if we pass in a value to the first next call, it won’t be obtained by yield . For example, we can write: function* genFn() { while (true) { console.log(yield); } } const gen = genFn(); gen.next('a'); gen.next('b'); gen.next('c'); Then: b c is logged. The first call of next feeds 'a' to the generator, but there’s no way to receive it since execution hasn’t yet reached a yield . Once the generator is suspended at a yield , a value can be received. yield ‘s operand is returned once next is called; here the returned value would be undefined since yield has no operand. The 2nd invocation of next feeds 'b' into the generator, where it’s received by yield and logged with the console log. And next returns undefined again because yield has no operand. Therefore, we can only feed data to yield with the next calls that come after the first one. For example, we can write: function* genFn() { while (true) { console.log(yield); } } const gen = genFn(); gen.next(); gen.next('a'); gen.next('b'); gen.next('c'); Then we get: a b c logged in the console. yield Binds Loosely yield treats the whole expression that’s after it as its operand. For instance, if we have: yield foo + bar + baz; Then it’s treated as: yield (foo + bar + baz); rather than: (yield foo) + bar + baz; Therefore, we have to wrap our yield expressions in parentheses when they appear inside larger expressions, so that we avoid syntax errors. For example, instead of writing: function* genFn() { console.log('yielded: ' + yield); } We write: function* genFn() { console.log('yielded: ' + (yield)); } return() and throw() The return and throw methods are similar to next . return lets us return something at the location of yield . throw lets us throw an error at the location of yield . A generator is suspended at a yield operator; the value from next is sent to that yield , while return terminates the generator. For example, we can write: function* genFn() { try { console.log('start'); yield; } finally { console.log('end'); } } const gen = genFn(); gen.next() gen.return('finished') We have the genFn generator function. The next call runs the generator up to the yield , and return ends the generator, running the finally block on the way out. If we log the value of the return call, we get: {value: "finished", done: true} throw() Lets Us Throw Errors We can throw errors into a generator with the throw method. For instance, we can write: function* genFn() { try { console.log('started'); yield; } catch (error) { console.log(error); } } const gen = genFn(); gen.next(); console.log(gen.throw(new Error('error'))); We call throw on the generator object. After next starts the generator and suspends it at the yield , calling throw resumes it there and the catch block handles the error. Then we’ll see: Error: error logged by the console log in the catch block. And {value: undefined, done: true} is returned from the throw call. Photo by Jerry Zhang on Unsplash Conclusion We can call the next method to resume a generator at its yield operator; the value passed to next becomes the result of that yield . The return and throw methods similarly end the generator or throw an error at the location of the yield .
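To tie these pieces together, here is a minimal, runnable consolidation of the snippets above:

```javascript
function* genFn() {
  try {
    while (true) {
      console.log('got:', yield); // each later next(value) lands here
    }
  } finally {
    console.log('cleaning up'); // runs when the generator is terminated
  }
}

const gen = genFn();
gen.next();      // starts the generator; runs up to the first yield
gen.next('a');   // logs: got: a
gen.next('b');   // logs: got: b
console.log(gen.return('done'));
// logs: cleaning up
// then: { value: 'done', done: true }
```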
https://medium.com/dev-genius/best-of-modern-javascript-generator-methods-e4eee6e002bb
['John Au-Yeung']
2020-11-05 18:37:55.198000+00:00
['Technology', 'Programming', 'Software Development', 'Web Development', 'JavaScript']
477
The Future of Higher Ed: the Smart Campus
We’re living in an ever-connected world thanks to the Internet, and thanks to smart gadgets, we live in a world of continuous improvement. Smart gadgets linked to the Internet monitor many things, from heart rates to electricity consumption, or even when a package is delivered. This is the domain of the Internet of Things (IoT). Smart technology has changed the way we live our lives, as information and entertainment are available 24/7. What we do with that type of information is key. We already see the rise of smart cities, in which urban areas use IoT devices and sensors to collect data from citizens and devices. This information is then analyzed to monitor and manage things like transportation, power, water, waste management, crime, and other community services. Yet the adoption rate of smart cities is pretty slow because cities have a very niche constituent base with specific needs and concerns. But campuses don’t have that type of limitation. In fact, the number of smart campuses is growing faster than smart cities, as digital-native students expect online connectivity in their learning environment. Benefits of a smart campus Gartner defines a smart campus as “…a physical or digital environment in which humans and technology-enabled systems interact in increasingly open, connected, coordinated and intelligent ecosystems. Multiple elements, including people, processes, services and things, come together to create a more immersive, interactive and automated experience for students, staff, faculty and stakeholders of a university or college.” The benefits of a smart campus are shown through the reduction of operational costs and increased efficiency through smart lighting, security, transportation, and smart utilities. This includes reduced energy and water consumption, less traffic and fewer parking issues, and better-utilized space. These benefits result in both campus community satisfaction and real financial savings. For example, the University of Central Florida (UCF) will start scanning license plates of cars coming or going from campus to eliminate the need for parking passes and to check against police databases. According to the university’s statement, in addition to their policing function, the cameras would eventually eliminate the need for parking decals or hangtags, as drivers’ license plates “will act as parking permits.” Less obvious benefits result from a more connected campus culture. With new learning models and smart campus technology, every student has access to conferencing tools and can jump in on collaborative sessions within seconds. Screen casting and application sharing also enable students to work collaboratively without being physically in the same place. In that same sense of collaboration, digital signage, facial recognition, and smart cards can enhance students’ ability to move around the campus with ease, making it easier to accomplish routine tasks and transactions. And students do feel connected: a survey from TrouDigital found that 95% of students and recent graduates thought that ‘digital displays are beneficial for student communication’. “In many respects, digital signage is a mouthpiece for smart campuses,” explained TrouDigital’s Marketing Manager, Lee Gannon, “providing a unique physical platform that keeps the student body informed and feeling connected to their university.” An example of IoT use in student retention can be seen at Arizona State University (ASU).
ASU launched a pilot project to see if using IoT to take attendance could help advisers reach out to students. The university offers an introductory course called ASU 101 for first-time freshmen. “We know that students who fail to attend that course on a regular basis have a very high probability of failing to persist,” said CIO Gordon Wishon. “So we have a vested interest in understanding which students are attending that course on a regular basis and those who are not.” But with section sizes of up to 300 students, faculty members have no time to take attendance. With the approval of the institutional review board, the provost’s office is sponsoring a pilot project asking the students to opt in to allowing ASU to track their location using a virtual beacon when they enter the classroom. Wishon said the data would not go to the faculty member but could help the institution identify which students might be at risk because they are failing to attend (a toy sketch of this flagging logic appears at the end of this article). How to make your campus smart Universities should follow Amazon’s lead and streamline the consumer journey. How? Implement the most effective technology that has the greatest chance of improving the student journey. A prime example of implementing a plan to overhaul technology and applications can be seen at Oral Roberts University (ORU). ORU saw an all-time high in student retention: freshman fall-to-spring retention was 95.5 percent and sophomore fall-to-spring retention was 93.1 percent. ORU also saw an all-time high of 99.4 percent placement of graduates. ORU followed this smart-campus approach and measured success by asking: Did the students receive direct value from what was integrated? Did we successfully white-label the technology and focus on the outcome of the overall system design, rather than the product name itself? Did we innovatively do something with the vendor product that their own development team did not think about doing? Bringing it all in Smart campus implementation can improve three key areas of campus life: the experience, efficiency, and education. Increasing interconnectivity between technology products will continue this trend by enabling students to do more with their time while using fewer resources. It will also reshape how students interact with an institution by providing a connected student experience. It can be the tipping point for a digital transformation that will address the future of learning and work. By leveraging the benefits of a smart campus, universities can easily modernize while remaining sustainable and relevant to their students, staff, and faculty, as well as the surrounding community they serve today.
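As noted above, here is a toy sketch of the attendance-based flagging logic. The event shape and threshold are assumptions for illustration, not ASU’s actual system:

```javascript
// Illustrative only: event shape and threshold are assumptions,
// not ASU's actual beacon attendance system.
const events = [
  { student: 'A', session: 1 }, { student: 'A', session: 2 },
  { student: 'B', session: 1 },
];
const totalSessions = 3;
const maxMissed = 1; // hypothetical threshold

function atRiskStudents(events, students) {
  return students.filter(s => {
    // Count distinct sessions this student was seen in
    const attended = new Set(
      events.filter(e => e.student === s).map(e => e.session)
    ).size;
    return totalSessions - attended > maxMissed;
  });
}

console.log(atRiskStudents(events, ['A', 'B'])); // ['B']
```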
https://medium.com/processmaker/the-future-of-higher-ed-the-smart-campus-1e06fe5120a6
['Matthieu Mcclintock']
2019-07-19 15:56:35.246000+00:00
['Smart Cities', 'Internet of Things', 'Higher Education', 'Technology', 'Edtech']
478
Different Country, Same Passion @ Scout24: Meet Javier Moro Sotelo
Javier Moro Sotelo is originally from the Spanish region of Galicia and made the move to Germany for a new work opportunity in 2013. Since then, he’s held various positions at the widely known Groupon, and later joined us at Scout24 in 2017, now holding the role of VP of Platform Engineering. Javier Moro Sotelo, called Moro, has been working at Scout24 since 2017. Hi Moro, In your career so far, you have had many different experiences, including working in Spain as an Engineer for the world’s largest fashion retailer, Inditex, and moving from Spain to Germany for work. What made you choose to move to Germany? How has your experience been so far in Berlin? Moro: When my partner and I decided to move from Spain, we didn’t have a fixed destination. We got lucky, and we ended up in a position where we could choose where to move. What attracted us to Berlin was how eclectic, open, and diverse the city is. It doesn’t matter what you are looking for, you will find it in Berlin. It is a fantastic place to live in. You started at Scout24 as the Director of Engineering for Site Reliability Engineering. How did the recruiting process work for you back then? In hiring for your team now, what do you focus on and see as important qualities for candidates to have? Moro: When I interviewed with Scout24, something that fascinated me was the marked focus on mindset and values. That still holds true today. We believe that skills like empathy, collaboration, and having a growth mindset are essential to unlock our full potential and achieve amazing things. You recently changed roles at Scout24, taking on the role of Vice President of Engineering. Please tell us more about your role and responsibilities. And what do you enjoy about your job and working at Scout24? Moro: As VP of Engineering, I lead the Platform Engineering organisation at Scout24. Therefore, I have the privilege of working with a diverse set of teams that focus on helping people boost their productivity through the use of technology. Our areas of focus are employee technology, developer productivity, data, and AI. One of the things I have enjoyed the most about my time at Scout24 is how natural it has become for me to take on new challenges that fall way outside of my comfort zone. Sure, it’s scary at first, but having a network of supportive peers provides enough confidence to navigate any challenge. Having the opportunity to grow into your role while you learn new things and see how your work makes an impact is tremendously rewarding. As you’re in a leadership position, what do you think distinguishes leadership at Scout24? Moro: Scout24 has a broad and diverse set of leadership styles, and we use that to our advantage to learn from each other and bring different perspectives to the table. One common denominator, however, is being people-centric. We — and not only those in a leadership position — genuinely care about people. From well-being to career development and personal growth. We believe in hiring amazing people, giving them responsibility, but also power, trust, and space to do what they do best. Working in the tech industry, how does Scout24 differ from other (tech) companies? In which areas do you feel Scout24 is leading the way? Moro: We are a digital company, and part of our mission is to bring the offline world online. Technology is at the heart of what we do. One aspect that keeps positively surprising new Scouts is how smooth things are during onboarding. Things just work right out of the box, independent of your role and even location.
Our technology strategy enables us to work anywhere. That enabled us to smoothly transition to work from the safety of our homes and help flatten the curve at the beginning of the COVID-19 crisis. Describe working at Scout24 in 3 words :) Moro: Make it happen. If you're interested in moving to Germany to work at Scout24, check out the latest job opportunities on our website.
https://medium.com/scout24-engineering/different-country-same-passion-scout24-meet-javier-moro-sotelo-a43c7aac539d
[]
2020-07-09 09:59:18.440000+00:00
['Technology', 'Site Reliability Engineer', 'Scout24', 'Platform Engineering', 'Leadership']
479
Qavalo: How breakfast can transform your company’s productivity
by Albert Padin, Symph CTO It all started with breakfast, which I suggest everyone eat daily, but it wasn’t just the food that transformed a company’s productivity. I had a breakfast meeting with the executive team of Qavalo — Laurice (founder), JP (data scientist), and Carmela (manager). It began with an idea, and as with most things in the idea phase, we were not quite sure what the end result was going to be. But I was looking forward to taking a look at their processes to see if there was anything technology could do to help them monitor and increase their productivity. Qavalo provides medical documentation quality assurance services to home health businesses in the USA. Their team is responsible for ensuring that all their clients’ documentation is filled out correctly and completely. Attention to detail is critical. So I listened to the Qavalo team explain their meticulous QA process. I asked several questions along the way and dug into the details of each step of the process. I was probing and hoping to discover patterns of repetitive steps or tasks. We spotted one not too long into the meeting. 400 hours every month… The staff had to repetitively copy and paste information on every task into a Google Sheet file so that they could keep a record of the tasks they were working on. Individually, this didn’t take a lot of time, but cumulatively for their team, it quickly added up. On average, it took 2 minutes for each person to copy and paste all the information for one task. If there were 30 tasks for a person, that would be a total of 60 minutes consumed per day. That’s an hour! If you had a staff of 20, then that’s 20 hours spent on copying and pasting per day! Across roughly 20 working days, that’s 400 hours every month.
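The arithmetic above, worked out explicitly (the 20-working-days-per-month figure is an assumption that makes the stated total come out):

```javascript
// Worked arithmetic from the paragraph above
// (assuming ~20 working days per month).
const minutesPerTask = 2;
const tasksPerPerson = 30;
const staff = 20;
const workingDaysPerMonth = 20; // assumption

const hoursPerPersonPerDay = (minutesPerTask * tasksPerPerson) / 60; // 1
const hoursPerDayForTeam = hoursPerPersonPerDay * staff;             // 20
const hoursPerMonth = hoursPerDayForTeam * workingDaysPerMonth;      // 400
console.log(hoursPerMonth); // 400
```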
https://medium.com/@symphco/qavalo-how-breakfast-can-transform-your-companys-productivity-1903547f6715
[]
2019-02-14 08:27:57.495000+00:00
['Productivity', 'Technology', 'Healthcare', 'Efficiency', 'Web Application']
480
Top UI UX Design Inspiration 134
Tips From Post Authors Your business’s conversion rate should be the main focus of your overall strategy. The point is, you need to make sure that what you do serves the goal of increasing conversions. That is why you should look for a web solution provider who can lead you down the right path. Here, we’re considering the role of user interface design companies, which can guide you to success. Conversions can be increased through a wide array of effective means; the general strategy is to make potential customers happy. How To Achieve This Knowing how to achieve this is an important part of web design and development. Many web providers will naturally recommend using available video or static layouts, but this is not a good idea, because such designs will not attract a large number of users. Since a design targets specific kinds of visitors, the strategy’s success will depend on whether the product or service is useful and appropriate for them. 1. Non-obtrusive elements The success of the user experience strategy will greatly depend on the appropriate use of non-obtrusive HTML5 and CSS3 elements, which provide users with the most suitable experience. This is a very important part of the main strategy for achieving a successful conversion rate. 2. Make it easier The strategy should also make things easier for end users to understand. Ease of use is closely tied to conversion logic, and the capabilities of HTML5 can greatly enhance the conversion effect. 3. Responsive Design Responsive design makes it easy to support the platforms the end user will be using, and makes it easy for users to adapt across them. The main objective is to ensure end users are served well across the diverse platforms the business runs on. 4. Empower users without compromising The most important part of the strategy is to ensure the end user is empowered. The user experience should leave users happy and satisfied with the products or services. The objective is to make sure both the immediate user and the ultimate user are empowered. 5. Acknowledge the input methods The facilities we provide to the end user are a key factor in making sure they are well taken care of. They should be appropriate for the end user without compromising the experience. Using a variety of means, decide on the best way to convey the message to the end user. 6. Provide flexibility The strategy should not limit what customers can do. It should provide users with the best experience of the product or service, and ensure that end users understand the flexibility of the product or service provided. Conclusion We have seen how important it is for companies to have a well-planned strategy so that everything runs as it should. The point is, you need a solid plan in place before starting anything. Make it possible for your web presence to be the best thing to happen to your business.
https://uxplanet.org/top-ui-ux-design-inspiration-134-d2353d5726d7
['They Make Design']
2020-12-15 09:24:29.667000+00:00
['Visual Design', 'Design', 'Inspiration', 'Technology', 'Art']
481
Meet Trove: An Emotional Health Platform for Managing Trauma Recovery
Meet Trove: An Emotional Health Platform for Managing Trauma Recovery I’m a product designer interested in helping individuals lead healthier lives. I participated in HackMentalHealth’s mental health hackathon to explore the ways in which technology can be used to improve mental health. I also wanted to learn what hackathons are all about (first-timer)! Overview In this post, I’ll discuss: Hack Mental Health (HMH), an organization bringing together leaders in mental health and technology Trove, a recovery community for survivors of sexual assault Our team’s hackathon process Who This past weekend, HackMentalHealth hosted Hack Mental Health x UCSF 2019. There were 500+ participants and 61 hackathon teams. Our hackathon team consisted of 6 lovely humans. Team Trove at Hack Mental Health 2019 What For this hackathon, we decided to hone in on the trauma recovery process. In many trauma scenarios, there are clear first steps to take immediately following the event. However, months or years later, there is little in the way of helping people understand how to fully integrate the experience into their life. Recovery from trauma is not a straight path. Challenges include: Treatment is not one-size-fits-all. For example, pre-trauma life experiences can affect how people respond to treatment. There are hundreds of treatment types, from the medically accepted to the spiritual. Take a look at The Body Keeps the Score, a book on the history of post-traumatic stress disorder, to get a sense of this. It can be difficult to discover the techniques that are available. How might we make recovering from trauma easier? Meet Trove: An emotional health platform for managing trauma recovery. Trove empowers trauma survivors by helping them discover useful techniques, map their process, and heal through sharing. The idea behind Trove comes from an inspiring woman and our team lead, Amelia. We were fortunate to help her bring one small part of her vision to life. A few select screens from the Trove app When designing for those in a sensitive headspace, we had to consider the following: How might we create a visual language that walks the line of inspiring and bright while holding space for the gravity of the traumatic event? How can we build trust around the idea that information is public to the community yet secure? How might we build feelings of empowerment around their recovery process? How We had a strong team and a well-thought-out idea. We broke off into pairs of specialists working together. Our process went as follows: Trove’s Hackathon Process Wrapping Up We were lucky to come in third place. We had a blast building the first iteration of Trove and will continue to build on the project. You can follow the project on Twitter. HackMentalHealth was an excellent way to get exposed to opportunities in mental health. Some of the projects that inspired me were about supporting sufferers of less common disorders using chatbots (see Trichy), addressing widespread compulsive disorders in novel ways (see Amazoff), finding better ways to track sleep, posture, and more. Outside of the hackathon, I learned about projects built out of pure empathy like Karuna Community and Quirk. I left the weekend feeling inspired by all the people helping us understand mental wellness. I hope to contribute to this community going forward. Want to continue the conversation? Shoot me a message at [email protected] :)
https://medium.com/hackmentalhealth/meet-trove-an-emotional-health-platform-for-managing-trauma-recovery-542882a10ba5
['Mel Smith Habibian']
2019-04-01 15:27:13.393000+00:00
['Hackathons', 'Mental Health', 'Health', 'Health Technology', 'PTSD']
482
To Market, or Not to Market Ethereum
To Market, or Not to Market Ethereum To market, or not to market? The question has been a hot topic for discussion in the Ethereum community as of late. From MarketingDAO opening for proposals to commentary from leaders throughout the ecosystem, everyone seems to have an opinion on whether or not — and how — to market Ethereum. In this article, I'll survey some of the opinions shaping this conversation and share my thoughts as a marketing professional who's spent the last three years helping startups and their audiences understand decentralized technologies. Conflicting Opinions Ethereum Foundation community manager Hudson Jameson admitted that his opinion on marketing Ethereum has changed. Jameson goes on to say, "It's normally the tech that appeals to the most people by getting their name out," which is a great point. Developing new technology for mass adoption is as much a social challenge as a technological one. Developers may work on obscure projects for large paychecks, but the platforms they use when building their portfolios are probably the ones that will see some degree of longevity. In his blog, DeFi Dude explains: "I'm not against marketing Ethereum at this point. I'm also not directly for it either… There's no use in explaining the greatness of DeFi and other concepts when the user would have a very hard time using it in the first place…" This is also a fair point. Most dApps and DeFi products aren't ready for users who aren't already familiar with Ethereum; however, users are far from the only audience marketing efforts could target. Quantstamp community manager @safetyth1rd notes that progress on certain technologies within the Ethereum community isn't always well communicated to outside audiences. While I can't speak to communication on the development of decentralized infrastructure, I've struggled in my work to encourage greater communication between dApp developers working to solve similar problems. Successes and Failures The best example of effective marketing in the Ethereum community I can think of was the campaign to establish DeFi (decentralized finance). A group of companies with different approaches to FinTech development got together, defined their common ground, and successfully promoted their brand and principles in a way the community has embraced. Lessons to take away from this: 1. Communication is key Taking the time to coordinate a collaborative approach helped amplify the message once it was ready for distribution. 2. Good marketing takes planning Efforts to market Ethereum need to be coordinated in an open, transparent process with defined goals. 3. Be inclusive Finding opportunities for meaningful engagement isn't easy, but embracing the input of eager supporters is the surest way to retain their attention and multiply your efforts. This brings me to MarketingDAO, an effort to fund projects that contribute to the marketing of Ethereum. When looking into participating, I found myself shuffled into a Telegram group and asked to submit my proposals to a faceless entity, with no clue as to what others might be proposing or what resources were available. I can only speak for myself, but I know I wasn't the only marketing professional confused and disappointed by what I experienced. Transparency and opportunities for engagement should be at the forefront of these efforts. Do Ethereans understand marketing? If we want to grow Ethereum to the best of our abilities, we need to be able to honestly assess our weaknesses.
The Ethereum community has undeniable talent and has been incredibly fortunate, both financially and in terms of the generosity of the community when it comes to supporting the development of decentralized technologies. I don't know whether or not that statistic is true, and I don't believe Richard's idea that "has money" equals "good community," but it's obvious that an impressive number of early investors have made impressive sums of money on their cryptocurrency investments. That's great for those individuals, and we're lucky that so many are generous in funding the development (technological and otherwise) of decentralized technologies, but being well funded isn't the same as being able to build sustainable businesses. I don't bring this up to pick on founders, but to emphasize the need to solicit input from professionals with all of the different skills necessary to build a business. Far too many startups stand in their own way by trying to reinvent basic business practices instead of respecting them. Different skill sets are needed to move a business forward. Why marketing matters It's critical that we, the marketing professionals of Ethereum, take responsibility for the misconceptions spread by the less informed, less scrupulous practitioners of our craft. Contrary to popular belief, marketing isn't just about finding new customers for an existing business. Marketing is not about spam or "hucksterism" or deception. Good marketing is effective communication. There's a good reason it's often referred to as marketing and communications. Marketing starts with the conversation between founders and their first customers as a startup finds product/market fit, and it evolves as the company (hopefully) scales to serve larger audiences. It starts with a series of questions: Is there a market for the product? If so, how do we communicate with that market in an effective and scalable manner? If not, who can help us better understand the problem our product aims to solve, and how can we tailor the product to better suit that market? Once a company understands who it's serving, then it makes sense to start designing campaigns and finding ways to measure and optimize growth. What does this mean for Ethereum? In terms of promoting the growth of Ethereum, it means a few things: 1. Defining audiences I see a lot of discussions promoting the idea that developers are Ethereum's primary audience. That makes sense, but building sustainable businesses requires more than just developers. Ethereum needs marketing to connect and educate investors, designers, educators, marketers, and entrepreneurs on what Ethereum is, why it matters, and how to get involved. Our community could also do a better job of coordinating between startups working on dApps and decentralized infrastructure and groups working on "enterprise applications" like supply chains and self-sovereign identity. 2. Measuring growth Learning about your audience and producing helpful content are important parts of marketing, but distributing content isn't marketing if nobody is keeping track of whether it supports organizational goals. This is where metrics like the number of developers working on Ethereum, market capitalization, etc. come into play. 3. Being sustainable Sustainability is often discussed in the context of funding the development of open-source projects. Many projects rely largely on things like grants and donations with no plans for monetization, but financial sustainability is an important element to consider.
Facilitating donations to fund projects isn’t as sustainable as encouraging projects to find ethical paths toward monetization. Want to share your thoughts? I believe Ethereum could benefit from effective marketing. What do you think? Please feel free to reach out using the following channels: Email: [email protected] Twitter: @GoldenChaosGod LinkedIn: @asethgoldfarb
https://medium.com/@goldfarbas/to-market-or-not-to-market-ethereum-c503e9cfb959
['Seth Goldfarb']
2020-02-06 18:13:40.458000+00:00
['Decentralized', 'Ethereum Blockchain', 'Ethereum', 'Blockchain Development', 'Blockchain Technology']
483
Quantifying Ports with Planet Labs Data
Like kids in a candy shop, we've been excited to dig into all the data supplied by Planet Labs' Explorer Program. The promise of daily and even multi-day revisit rates for satellite imagery really opens the door to new kinds of analytics. We believe that as Planet Labs continues to grow its flock, the upside of high revisit rates will really come to fruition. With that in mind, we wanted to flex the Timbr.io platform to see what it could do with Planet Labs data. Chris Helm jumped all over it, and with Pramukta's image processing skills they did some pretty amazing work — at least in my biased estimation. First, Chris created a Planet Labs metadata source that polls their API for imagery by source for a given bounding box or long/lat. Code for the Planet Labs source available (here). (Throughout this post the code for each step is linked in the caption for the image, building on awesome libraries like Rasterio, scikit-learn, and OpenCV.) To test out the new source we wanted a location with lots of activity, so we zeroed in on the San Diego Naval Yard. Our goal was to create a set of reusable methods that could be combined to extract ships from the yard and then provide the change in counts by dock over time. First we needed to create a pipeline to discover good image candidates for analysis. Using the Planet Labs source we could set up a query for our bounding box of interest that would grab currently available imagery as well as poll that source for new imagery as it becomes available. To make our analysis easier, we'd like to set up a pipeline that preprocesses the data to get just what we want. For instance, we can grab the thumbnails and do a bit of metadata and imagery analysis to identify cloud-free imagery. Once the pipeline collected a set of metadata, we created a snapshot of the data and published it to our Jupyter notebook. This made our data available for discovery and analysis through the extension we created for Jupyter. Metadata available (here) From our pipeline collection of metadata we can then grab the imagery we want to build models with by using a transform in the Timbr.io repository. Code available (here) In the search results we can see several data transforms that could help us analyze our data. We may want to scroll through our imagery collection or clip it to a specific area of interest — like this collection of docks in San Diego. Code available (here) The first analytical method we will dive into is separating land from water in the image so we can extract the docks and ships. We can separate the coastline from the image array by thresholding the image to create a binary land/water mask. Then we can apply a distance transform to both land and water, and extract a buffer where the two transforms meet (a minimal sketch of this step appears below). The output of this method gives us a clean buffer that we can use to extract docks and ships from multiple images. We can then take the union of the buffer from each of our images to get a clean mask of the persistent features in our area of interest (the docks). Code available (here) Now that we have an appropriate buffer mask, we can start cropping the docks and boats from each image. This gives us a set of discrete units against which we can do our more sophisticated analysis. In order to isolate docks as unique features, separate from land, we can derive a topological skeleton of the buffer mask. By counting adjacent pixels and decimating regions with high pixel counts, we can break the skeleton into a series of unconnected segments.
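The post's per-step code links aren't reproduced here, so the following is a minimal sketch of the thresholding/distance-transform buffer idea described above, written with NumPy and SciPy (the authors' stack also includes Rasterio and OpenCV). The threshold value and buffer width are illustrative placeholders, not values from the original notebook.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def coastline_buffer(band, water_threshold=1000, buffer_px=10):
    """Boolean mask of pixels within `buffer_px` of the land/water boundary.

    `band` is a single-band image array (e.g., already read with rasterio);
    `water_threshold` is a placeholder — real scenes need a tuned value.
    """
    water = band < water_threshold  # crude binary land/water split
    land = ~water
    # Each Euclidean distance transform is zero on the opposite class, so the
    # sum is the per-pixel distance to the coastline from whichever side a
    # pixel sits on.
    dist_to_coast = distance_transform_edt(water) + distance_transform_edt(land)
    return dist_to_coast <= buffer_px

def persistent_features(bands):
    """Union of the buffers across image dates, as described above (the docks)."""
    return np.logical_or.reduce([coastline_buffer(b) for b in bands])
```

Taking the union across dates keeps anything that ever falls in the coastal strip, which is why the fixed infrastructure (docks) dominates the combined mask.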
Then we can take each segment and isolate the docks using the segments' orientation and solidity. Code available (here) As you can see in the cropped images, one of the trade-offs of a high revisit rate is the lower pixel resolution of the imagery. At 3–5 m per pixel we need to be a bit more creative with our methods to effectively extract ships and create counts. To do so we are going to create a dock/land mask that we can use to remove everything permanent from each image (land and docks). Next we can leverage a series of masking techniques that get unioned into a final "region_mask" that can be used to extract ships from docks, land, and water. The result is a clean segmentation of the docks from the ships and water. Code available (here) — dependencies with previous steps Now that we have a clean mask of each dock, we can get down to the business of counting ships across an array of Planet Labs images (a sketch of this counting step follows below). From our snapshot we have imagery ranging from 2013–2016 with a total of five time intervals. For each dock we can now create a count of ships across all the collected time periods. The result is a stacked bar chart showing the churn by dock. Code available (here) Further, we can take a look at the accuracy of our feature extraction algorithm by looking at the specific results for a dock. For region three we can see the method worked pretty well, missing just one ship in image four. There is also the opportunity to explore the distribution of ship sizes across the Naval Yard. Do some docks serve bigger ships exclusively? To answer this question we calculated the size of the contour areas for each extract and plotted them by dock. Code for this plot (here) The x-axis is the dock number and the y-axis is the size of the ships at that dock. While no docks look to serve big ships exclusively, we do see that the larger ships agglomerate around docks zero, four and six. This plot also opens the question of what the overall distribution of ship sizes is across all the data we collected. The result can be seen below. Code for this plot (here) The x-axis is the size of the ships extracted and the y-axis is the frequency of that size range occurring in the data. Not surprisingly, we see that the majority of ships are small and a small minority are large across our time series. If you'd like to check out the whole analysis we have a Jupyter notebook here. To operationalize this work we can have our pipeline polling for new images in our area of interest, then updating this analysis programmatically with the latest appropriate image from Planet Labs. This allows the analysis to be a proactive service instead of a post-mortem. We think this really shifts the potential for how analysis can be leveraged for business applications. For our next post in this series we'll dig into the reusability of the method and associated transforms. Specifically, we'll take our now nicely packaged algorithms and apply them to another port, and quantify how much tuning is needed to rinse and repeat the analysis across the globe. To sign up for early beta access to the platform, hit us up here.
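The counting step above (remove permanent features, label what's left, count and size blobs per dock) can be sketched as follows. The names `region_mask` and `dock_regions` are assumptions standing in for the masks produced in the earlier steps, and the thresholds are placeholders that would need tuning per scene; this is not the authors' actual code.

```python
import numpy as np
from scipy.ndimage import label

def count_ships_by_dock(band, region_mask, dock_regions,
                        ship_threshold=1500, min_px=20):
    """Count bright, non-permanent blobs (ship candidates) per dock region.

    `region_mask` excludes land and docks; `dock_regions` maps a dock id to a
    boolean mask of the water adjacent to that dock.
    """
    candidates = (band >= ship_threshold) & region_mask
    labeled, _ = label(candidates)               # connected-component labeling
    blob_sizes = np.bincount(labeled.ravel())    # blob_sizes[0] is background
    counts, sizes = {}, {}
    for dock_id, dock_mask in dock_regions.items():
        ids = np.unique(labeled[dock_mask])
        ids = ids[(ids != 0) & (blob_sizes[ids] >= min_px)]  # drop speckle
        counts[dock_id] = len(ids)
        sizes[dock_id] = blob_sizes[ids].tolist()  # pixel-area proxy for size
    return counts, sizes
```

Running this per image date yields the per-dock counts behind the stacked bar chart, and the collected `sizes` values feed the ship-size distributions discussed above.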
https://medium.com/planet-stories/quantifying-ports-with-planet-labs-data-13bc5782d867
[]
2016-04-26 14:37:58.240000+00:00
['Data Science', 'Satellite Technology', 'Algorithms']
484
Microsoft Flight Simulator Can Now Be Played in Virtual Reality
Image: Microsoft Windows Mixed Reality, Oculus, Valve, and HTC virtual reality headsets are all supported. By Matthew Humphries Microsoft Flight Simulator is one of the highlights of 2020, and the game just got even better for anyone who owns a virtual reality headset. Jorg Neumann, Head of Microsoft Flight Simulator, announced yesterday that Flight Simulator can now be played using VR headsets on PC. You may remember Microsoft put out a call for virtual reality beta testers back in October, and clearly found some. Neumann singles out the flight sim community as being “a very active and insightful partner in shaping how the team approached VR.” VR comes as a free update for the game and Microsoft is trying to support as many headsets as it possibly can. So far, the game will work with most Windows Mixed Reality headsets, including the HP Reverb G2, as well as all Oculus, Valve, and HTC headsets. If you already have one of those headsets, all you need is the latest update for Flight Simulator in order to start playing with a whole new level of immersion. Xbox gamers will look on jealously, knowing they have to wait until next summer to play the game (without VR). Meanwhile, PC gamers may still be waiting for their flight sticks to ship due to both demand and the pandemic limiting supplies of most peripherals this year. Virtual reality headsets are a little easier to find, although Valve’s Index headset currently has a shipment time of “8 or more weeks.”
https://medium.com/pcmag-access/microsoft-flight-simulator-can-now-be-played-in-virtual-reality-4f2882318e6a
[]
2020-12-24 19:02:15.845000+00:00
['Microsoft', 'VR', 'Gaming', 'Technology']
485
Opinion: Why traditional knowledge — not external tech — is the key to truly sustainable agriculture
Opinion: Why traditional knowledge — not external tech — is the key to truly sustainable agriculture Substituting organic “bio-inputs” for synthetic agrochemicals is still a one-size-fits-all, technology-focused solution, which means it won’t lead to sustainable agriculture Illustration by Sean Quinn By Nathan Einbinder and Helda Morales for Ensia | @ensiamedia The idea that our current agricultural and food system needs adjusting isn’t exactly revolutionary these days. In fact, many scientists and others believe that it could use an entire overhaul. After decades of technological advances focused on grain production and the development of synthetic inputs, there is finally recognition that the benefits — higher crop yields and increased food supply — also come with side effects. These include widespread soil and water contamination, human displacement from the expansion of large-scale monoculture farm operations, health impacts including diabetes, and heavy reliance on fossil fuels, among others. The solution to these problems, as suggested by the United Nations’ Food and Agriculture Organization, is to transition to sustainable agriculture. Hardly a novel concept, sustainable agriculture is something indigenous groups have been developing and practicing for eons. Yet it wasn’t until the early 20th century, at the advent of industrialized agriculture, that visionaries such as Eve Balfour and Lord Northbourne began to popularize the term through their work confirming the importance of diversity, ecological knowledge and a strong human/nature connection, as well as the value of small-scale family farming, which, despite the popular misconception that industrial systems are necessary to feed growing populations, continues to produce most of our food with fewer resources and less harmful impacts than the industrial model. While it’s encouraging to see the latest wave of interest in transforming the way we farm and eat — thanks in part to growing awareness of climate change (agriculture currently produces roughly 11% of global greenhouse gas emissions, rising to as high as 29% when taking into account the entire food system) — the increasingly simplified version of sustainable agriculture currently being sold to the public and to farmers is concerning. Specifically, we fear the growing trend of “input substitution” — that is, the mere swapping of chemical products, usually fertilizers and pesticides, for those that are organic and therefore considered less harmful and more “sustainable.” Don’t get us wrong: We support action that motivates farmers and consumers to make the switch from conventional to organic. Still, it is high time to distinguish what is truly sustainable from what is just another spin on the one-size-fits-all, technology-focused solutions that got us into this mess in the first place. Perspectives From the South For more than a decade we have been documenting the effectiveness of traditional practices by indigenous farmers in Guatemala, while also working on issues related to sustainability and development. As in other areas of Central America, the families we work with confront serious challenges to maintain their heritage as campesinos. Drought is increasingly frequent and severe; the political-economic context is hostile, at best; and the consequences of the violence of the 1980s, when entire villages were razed by the army and paramilitaries in the so-called war against communism, linger on.
As a means to improve food security, as well as tackle other issues related to cultural and environmental restoration, many residents have turned (or returned) to agroecology — a set of ancestral and sustainable practices and principles that include the use of heritage crops; seed saving; soil conservation measures, such as reduced or no tillage; and the production of composts from local organic material. The theory is that through crop diversification, redesign and the ecological use of local resources, families will become more resilient and less dependent on external inputs and aid while simultaneously protecting the environment. Don Cristobal, a farmer and community leader in Pacux, Guatemala, stands in a diversified parcel with trees and traditional milpa, an ancient polyculture system of corn, beans and squash, along with a host of other native edible plants. Photo courtesy of Nathan Einbinder Despite continued skepticism about the potential of agroecology as a viable alternative to the industrial model — particularly by the corporations that are bound to lose if it is taken to scale — a growing body of research shows that it works. While local interest in agroecology is notably high (the corn-beans-squash milpa system, developed in the region thousands of years ago and still used today, is considered one of the world’s finest examples of agroecological production), community associations that promote it struggle to compete against national and international programs that gift agrochemicals and hybrid seeds, which, despite bumper harvests in the first year, cannot be re-collected due to their sterility or patents, and are reported to us by local farmers as intolerant to drought. The logic behind these programs, aside from supporting a multibillion-dollar industry that relies on new customers, is to provide a quick fix to problems related to lack of soil nutrients, as well as to cut labor costs and reduce physical work in general. Simplified “Organic” Alternatives While substitution of organic “bio-inputs” minimizes — at least as far as we know — environmental harms associated with synthetic agrochemicals, externally produced organic bio-inputs once again promise quick and easy results, with many of the same consequences. Native corn varieties such as those shown here are adaptable to drought and come from seeds that were passed down from this farmer’s parents and grandparents in Pacux, Guatemala. Photo courtesy of Nathan Einbinder In one noted program, a Guatemalan company has teamed up with USAID to manufacture bio-stimulants, fertilizers composed of microorganisms that improve soil quality. As with previous programs, these inputs are gifted to farmers, and the goal, according to the company, is to turn the country into a so-called “biotechnology hub,” while simultaneously encouraging family producers to shift away from using petroleum-based inputs such as synthetic pesticides and fertilizers — especially in the production of export crops, where overuse has cost companies millions of dollars due to unacceptable residue levels found by importers. Truly Sustainable Agricultural Systems As we previously mentioned, we’re in favor of strategies that reduce farmers’ dependence on synthetic agrochemicals as well as those that promote healthy microbial interactions in the soil — two interconnected elements, among many, that constitute sustainable farming. So, what makes these inputs so incompatible with truly sustainable agricultural systems?
To begin, as suggested by sustainability expert Jules Pretty, the foundation of any sustainable agricultural system is effective and ingenious use of local resources by local (small-scale family) farmers. For this to occur you need strong social capital, intimate knowledge of local ecology and continuous innovation. Sustainable agriculture relies on the use of local resources by local, small-scale family farmers. Here, a group of small-scale farmers teach sustainable low-impact and indigenous methods to other indigenous farmers in Guatemala. Photo courtesy of Nathan Einbinder The problems with introducing “bio-inputs” into the communities are displacement of local practices and the risk of creating new forms of dependence. Just as the introduction of agrochemicals (and the miracle solutions it promised) disrupted local agroecological processes 40 years ago, the organic equivalents continue this cycle, displacing local knowledge of how to maintain soil fertility through careful management of organic material; how to avoid pest outbreaks through intercropping and knowledge of soil types and microclimate; and how to use other interrelated practices that, among the families we work with, involve a deep connection to the land and its stewardship. If sustainability is the true objective, we need to focus, not on new technological fixes, but on the recovery of local, time-tested agroecological practices. We need to empower those who hold traditional knowledge, support local groups, and introduce new techniques only when easily appropriable and harmonious within surrounding nature and customs. Only when this happens will we see a shift from the problems that come when external actors, actions and products disrupt farmers’ traditional ways of land stewardship. Only when we once again start valuing agriculture as a system that is part of a larger social system, with all the history and tradition that comes with it, will we see a shift to truly sustainable agriculture. Editor’s note: The views expressed here are those of the authors and not necessarily of Ensia. We present them to further discussion around important topics. We encourage you to respond with a comment below following our commenting guidelines, which can be found on this page. In addition, you might consider submitting a Voices piece of your own. See Ensia’s Contact page for submission guidelines. Originally published at ensia.com in July 2019.
https://medium.com/ensia/opinion-why-traditional-knowledge-not-external-tech-is-the-key-to-truly-sustainable-ff1b8c14733c
[]
2019-07-15 14:36:01.531000+00:00
['Technology', 'Sustainable Agriculture', 'Agriculture', 'Chemicals', 'Sustainability']
486
Data Literacy — A New Skill to Fuel Career Growth, Especially in a Non-Tech Company
Data Literacy, the Evolution and Demand for Different Kinds of Data Skills Over the past decade, we have seen an evolution of data skills. In the early days of data science, many companies looked for candidates with strong programming skills in languages such as R, Python, and SQL, together with good fundamentals in mathematics or statistics. During that period, programming and statistics were a clear differentiator. As more people jumped onto the bandwagon of coding and statistics, those skills have become a commodity today. In addition, many algorithms and technologies are now embedded into data platforms, which further lowers the barrier to entry into the data science arena. Today, companies are looking for a different set of data skills: Employees need to understand the workflow of data mining and be able to challenge the output of algorithms, not simply accept and assume that system decisions are always right. In the near future, every employee in every function within an organization should possess some data-related skills. A recent study conducted by Harvard Business School found that many data teams are not suffering from a lack of technical skills; rather, they lack skills in data-driven problem-solving. I can clearly identify and empathize with the above study, especially in a commercial (non-tech) organization. Some of the skills that are lacking in commercial and data professionals are: Commercial professionals 1. Know how to ask the right questions about the output of algorithms 2. Understand which data is relevant for their business, separating lagging and leading indicators 3. Design A/B tests and conduct structured experiments to test their hypotheses 4. Know how to interpret the analyzed data from the data team and link it back to their business 5. Create data-driven, compelling stories and business cases to communicate to business leaders Data professionals 1. Know how to ask the right business questions 2. Translate business problems into data mining problems with reasonable assumptions in place 3. Possess sound business and financial acumen to run analyses that are relevant for the business 4. Create easy-to-digest presentations and visualizations for business leaders to understand What Can You Do to Improve Your Data Literacy? Digital leaders will emerge from enterprises that embrace and extend analytics throughout their organizations. However, low data literacy is holding them back. It is time to get serious about improving our data literacy. Just working on and rolling out data projects does not equal being data literate. Being a data scientist also does not necessarily mean that one is data literate. Recognize that data literacy is important for your professional career growth A quote from Jim Rohn: “If you really want to do something, you’ll find a way. If you don’t, you’ll find an excuse.” It is important to deeply recognize that data literacy has become very important, not just for data professionals but for everyone. Based on the earlier discussion, you also know that companies, especially non-tech companies, need more people with the ability to interpret data, draw insights, and ask the right questions for the business. Data-driven decision-making markedly improves business outcomes. Data literacy is not exclusive to data professionals. It is the new core business skill that future business leaders and managers should possess. While data literacy is a skill that anyone can develop, it takes time, persistence, and practice.
Unless you are convinced that data literacy can help you become a better employee, manager, or leader, you’ll find excuses to delay your learning. Continuously learn and pick up technical skills if you need them You may not need to be a master of analytical tools. However, to have fruitful discussions, ask meaningful data-related questions, and perform certain data-related tasks, you must “know enough”. Extracting insights from a vast amount of data is a great challenge for many employees. According to a study by Accenture, an eye-opening 74% of employees reported feeling overwhelmed or unhappy when working with data. This has a negative impact on their performance: 36% of those overwhelmed employees spent at least one hour a week procrastinating on data-related tasks and found alternative methods to complete the tasks without using data. Another 14% avoided the tasks entirely. Marketing managers can assess their data literacy level by whether they understand, at a high level, the flow of an analysis and feel confident enough to challenge its output, whatever algorithm produced it. Excel is still the most common and useful tool for data analysis. If you want to feel in control of your data or get familiar with the data you have, you should learn Excel. You can refer to this infographic for the 100 most useful features of Excel and learn those that are relevant to you. For those who want to challenge themselves a little more, I recommend taking a look at courses from Udacity, Coursera, or DataCamp. Improve business and financial acumen Business acumen is the ability to understand business situations in order to make good judgments or decisions for the company, while financial acumen is the ability to evaluate the impact of business decisions on financials in the short and long term. Data professionals can measure how data literate they are by whether the organization is confident that they could step in as a marketing manager in the commercial unit. Business leaders face trade-offs in every business decision: balancing short- and long-term gains, tangible and intangible gains, etc. Analyzing the data with a solid understanding of the business and the financials will make your analysis much more relevant to your senior business leaders. The best way to improve your business and financial acumen is to ask for job rotations within the company, if possible. If you are a data professional, don’t be afraid to move to the commercial department and learn more about the business, and vice versa. If job rotations are not possible, then look for cross-functional project opportunities that allow you to work side by side with your business or data colleagues. Get the real feedback that you need on your analysis Have you been in situations in which you thought that senior leaders loved your presentation and analysis, but your recommendations were not implemented? If that is the case, you might need to find a senior leader who is willing to invest time in sharing real feedback on your analysis or presentation. That direct feedback will allow you to understand where and how you can further improve your presentation or analysis. I believe only people who truly want you to improve will give you candid feedback. I hope that the above discussion is enough to show that data literacy will be a very important skill for future leaders and managers.
For those who are preparing for a data science interview at a non-tech company, I have prepared a video on what data science managers look for in a candidate at this link. The opinions expressed in this article are my own, and I do not represent or speak on behalf of any organization. If you enjoyed reading this and would like to have a conversation on this topic, please feel free to reach out to me on LinkedIn or Instagram.
https://medium.com/technology-hits/data-literacy-a-new-skill-to-fuel-career-growth-especially-in-a-non-tech-company-6232b9cde542
['Andy Teoh']
2020-12-11 04:20:03.967000+00:00
['Technology', 'Marketing', 'Data Science', 'Careers', 'Career Development']
487
Master the JavaScript Interview: What is a Closure?
“Master the JavaScript Interview” is a series of posts designed to prepare candidates for common questions they are likely to encounter when applying for a mid to senior-level JavaScript position. These are questions I frequently use in real interviews. I’m launching the series with the $40k question. If you answer this question wrong, there’s a good chance you won’t get hired. If you do get hired, there’s a good chance you’ll be hired as a junior developer, regardless of how long you’ve been working as a software developer. On average, junior developers get paid $40k USD per year less than more experienced software engineers. Closures are important because they control what is and isn’t in scope in a particular function, along with which variables are shared between sibling functions in the same containing scope. Understanding how variables and functions relate to each other is critical to understanding what’s going on in your code, in both functional and object-oriented programming styles. The reason that missing this question is so disadvantageous in an interview is that misunderstandings about how closures work are a pretty clear red flag that can reveal a lack of deep experience, not just in JavaScript, but in any language that relies a lot on closures (Haskell, C#, Python, etc…). Coding in JavaScript without an understanding of closures is like trying to speak English without an understanding of grammar rules — you might be able to get your ideas across, but probably a bit awkwardly. You’ll also be vulnerable to misunderstandings when you’re trying to understand what somebody else wrote. Not only should you know what a closure is, you should know why it matters and be able to readily name several possible use cases for closures. Closures are frequently used in JavaScript for object data privacy, in event handlers and callback functions, and in partial application, currying, and other functional programming patterns. If you can’t answer this question, it could cost you the job, or ~$40k/year. Be prepared for a quick follow-up: “Can you name two common uses for closures?” What is a Closure? A closure is the combination of a function bundled together (enclosed) with references to its surrounding state (the lexical environment). In other words, a closure gives you access to an outer function’s scope from an inner function. In JavaScript, closures are created every time a function is created, at function creation time. To use a closure, define a function inside another function and expose it. To expose a function, return it or pass it to another function. The inner function will have access to the variables in the outer function scope, even after the outer function has returned. Using Closures (Examples) Among other things, closures are commonly used to give objects data privacy. Data privacy is an essential property that helps us program to an interface, not an implementation. This is an important concept that helps us build more robust software, because implementation details are more likely to change in breaking ways than interface contracts. “Program to an interface, not an implementation.” Design Patterns: Elements of Reusable Object-Oriented Software In JavaScript, closures are the primary mechanism used to enable data privacy. When you use closures for data privacy, the enclosed variables are only in scope within the containing (outer) function. You can’t get at the data from an outside scope except through the object’s privileged methods.
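The data-privacy example discussed next was originally an embedded JSBin snippet that isn't included in this text. Here is a minimal reconstruction: the `getSecret()` and `.get()` names come from the prose that follows, while the exact body is an assumption consistent with it.

```js
// Reconstruction of the embedded data-privacy example (names from the prose;
// exact body is an assumption). `secret` lives only in closure scope.
const getSecret = (secret) => ({
  get: () => secret // privileged method: closes over `secret`
});

const obj = getSecret('hello');
console.log(obj.get());  // 'hello'
console.log(obj.secret); // undefined — no way in from the outside
```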
In JavaScript, any exposed method defined within the closure scope is privileged. The sketch above is one example (the post’s original interactive version is on JSBin). In the example above, the `.get()` method is defined inside the scope of `getSecret()`, which gives it access to any variables from `getSecret()` and makes it a privileged method. In this case, that means the parameter, `secret`. Objects are not the only way to produce data privacy. Closures can also be used to create stateful functions whose return values may be influenced by their internal state, e.g.: const secret = msg => () => msg; In functional programming, closures are frequently used for partial application & currying. This requires some definitions: Application: The process of applying a function to its arguments in order to produce a return value. Partial Application: The process of applying a function to some of its arguments. The partially applied function gets returned for later use. Partial application fixes (partially applies the function to) one or more arguments inside the returned function, and the returned function takes the remaining parameters as arguments in order to complete the function application. Partial application takes advantage of closure scope in order to fix parameters. You can write a generic function that will partially apply arguments to the target function. It will have the following signature: partialApply(targetFunction: Function, ...fixedArgs: Any[]) => functionWithFewerParams(...remainingArgs: Any[]) If you need help reading the signature above, check out Rtype: Reading Function Signatures. It will take a function that takes any number of arguments, followed by arguments we want to partially apply to the function, and return a function that will take the remaining arguments. An example will help. Say you have a function that adds two numbers: const add = (a, b) => a + b; Now you want a function that adds 10 to any number. We’ll call it `add10()`. The result of `add10(5)` should be `15`. Our `partialApply()` function can make that happen: const add10 = partialApply(add, 10); add10(5); In this example, the argument `10` becomes a fixed parameter remembered inside the `add10()` closure scope. Let’s look at a possible `partialApply()` implementation, reconstructed in the sketch after this article’s text (the original is on JSBin). As you can see there, it simply returns a function which retains access to the `fixedArgs` arguments that were passed into the `partialApply()` function. Your Turn This post has a companion video post and practice assignments for members of EricElliottJS.com. If you’re already a member, sign in and practice now. If you’re not a member, sign up today. Explore the Series Updates: July 2019 — Clarified intro to explain why answering this question wrong could cost you a job or a lot of money in salary.
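The `partialApply()` implementation referenced above was also an embedded JSBin snippet. A minimal reconstruction consistent with the surrounding description (it returns a function that retains access to `fixedArgs` through closure scope) might look like this; the exact original body is an assumption.

```js
// Sketch of a possible partialApply(): the returned function closes over
// fixedArgs and completes the application with the remaining arguments.
const partialApply = (fn, ...fixedArgs) =>
  (...remainingArgs) => fn(...fixedArgs, ...remainingArgs);

const add = (a, b) => a + b;
const add10 = partialApply(add, 10);
console.log(add10(5)); // 15
```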
https://medium.com/javascript-scene/master-the-javascript-interview-what-is-a-closure-b2f0d2152b36
['Eric Elliott']
2020-08-26 01:12:04.085000+00:00
['Functional Programming', 'JavaScript', 'Programming', 'Technology']
488
‘When you see kids have that aha moment, you know you’ve got them for life’
How did you get into tech? I moved to New Zealand from England around 13 years ago and started temping as a receptionist in a tech company. When they called about the job, my first thought was: oh god, that sounds terrible. They’re all going to be socially awkward and boring. It was classic stereotyping. But on my first day, the lift doors opened and the first thing I saw was the Office Manager in a fit of giggles because the General Manager had said something funny. It wasn’t what I expected at all. That’s when I thought, you know what, I think I was wrong. This is going to be good fun. That job gave me a really good foundation in tech and opened my eyes to the possibilities that the industry offers. From there, I moved to another software company and made my way from receptionist to office management and then into events and outreach. When I saw the tech outreach and engagement role at Xero, it was perfect because it combined my newfound love of tech with other areas I’m passionate about, like giving young people access to education and inspiring people to achieve career success. What does your role involve? I do tech outreach, which is almost like long-term recruitment. So I engage with schools and educate students on what it’s like to work in tech, showing them that it’s not just about coding. The goal is to encourage more students to pursue a career in STEM, because we are going to need a lot more people joining the sector in the years to come. I also go to career events, run workshops for kids — anything that will show young people what opportunities there are if they work in tech. The engagement part of my role is about supporting our tech teams within Xero, so they can all come together and share ideas, learnings and inspiration. We hold Xed talks (a play on TED talks), run hackathons, host meetups for the tech community, and bring teams together from across the business to work in really creative and collaborative ways. As a global tech company, it’s our responsibility to support our own people. So my role is to foster that culture and do what I can to empower them. What are the challenges of getting kids into STEM? One of the biggest challenges is the false stereotypes associated with people in tech. That you have to be male, or a super nerd, or spend all day hunched over a computer. In my experience, if you expose kids to STEM in primary school, they’re really open-minded about it. They have no fear of failure and will give it a go. But something happens around the age of 12 — I don’t know whether it’s going to high school or the hormones kick in — but kids tend to be a lot more conscious about how people see them and their identity. So false stereotypes can be really challenging to overcome. Related to that is the fact that in mixed schools, it’s generally the boys who take up opportunities to study STEM. So if you’re the only girl in a classroom of boys, that’s tough. For me, it’s about recognising that we don’t have enough women in tech, so we need to support girls at school and encourage their curiosity. In New Zealand, we also don’t have enough Pasifika and Māori representation, as well as people from low socio-economic groups. One of the things I love about Xero is that our leadership recognises we need diversity of thought and an inclusive culture, which is an important step in the right direction. The other challenge is making sure they’re learning tech at school and have a teacher who inspires them.
In the next few years, it’s going to be compulsory for New Zealand primary and high schools to teach a digital technology element, which is great. But there’s a need to support these teachers and make sure they’re confident about teaching the latest tech, so they can become a champion for their students. Let’s be honest, we all remember having a teacher who didn’t inspire us. We dropped that subject as soon as we could. So empowering teachers is really important. What do kids need to spark their interest in tech? One of the things I’ve found in my role is that unless kids have a parent or older sibling who’s in tech, then generally they have no idea about the opportunities available. There’s this mindset that if kids are bright and have potential, then they should become doctors or lawyers or accountants. But tech is disrupting every industry, and it’s not happening in the distant future, it’s happening now. So they really need champions at home, at school and in the community who can encourage them and inspire them to learn more about the opportunities available. A few years ago, I was running tech workshops for kids. This one woman was fine with her son doing all the activities, but for some reason she was hesitant about her daughter learning robotics. She didn’t say why, maybe she thought it looked hard and assumed her daughter couldn’t do it. But it turned out robotics was her daughter’s favourite activity. So it’s not necessarily just about educating students and teachers, it’s also about getting parents on board and reassuring them that it’s a fantastic career choice — they can earn great money and travel around the world while being really creative in areas that make a difference to so many people. What kind of impact can a champion have? It can make a huge difference. For example, I first met Penelope when she was seven. It was at an event in Auckland and she was really curious about what we were doing. So we paired up and did some robotics together. She really enjoyed it and even brought her mum back later on to show her what we’d done. I emailed her mum some activities for Penelope to do at home, but about two months later, she called me saying ‘Help! She will not stop talking about this robotics thing!’ She’d even been to Samoa for a holiday and was talking about it over there. Dr. Michelle Dickinson (Nanogirl), Penelope, and Ruth James From then on, I would catch up with Penelope and her mum on a regular basis and we would do basic coding stuff together. She’s 12 now and I see her once or twice a year. I’m not an engineer, so she overtook me pretty quickly! But it’s amazing how planting that seed and having a champion really made a positive difference. Penelope’s mum is amazing at finding opportunities and encouraging her to participate in things that will improve her coding skills and help her learn more about tech. No one in their family is an engineer, so it was quite incredible to see. What’s the most rewarding part of your role? I love working with kids who are going to be future engineers and designers and leaders. But the most rewarding thing is witnessing the moment when they truly get it. Their eyes light up and you can see they finally understand. I love that. I used to volunteer at a Code Club and we would be explaining computer science — not on a computer but talking through the logic and how code works. And you would see them have that ‘aha’ moment. Once that happens, you know you’ve got them for life.
https://medium.com/humans-of-xero/when-you-see-kids-have-that-aha-moment-you-know-youve-got-them-for-life-5d21f1c6cdfb
[]
2020-01-30 00:56:56.551000+00:00
['Technology', 'Women In Tech', 'STEM', 'Careers', 'Tech Education']
489
Money 20/20 Korea: Predicting the Future of Fintech
On Tuesday, 26th September 2017, Bitcoin Center Korea had the opportunity to participate in an event held by Money 20/20, which was in Korea for the first time ever. Money 20/20 organizes the largest global events enabling payments and financial services innovation for connected commerce at the intersection of mobile, retail, marketing services, data, and technology. In October 2017, Money 20/20 will hold an event in Las Vegas, where influential speakers from Silicon Valley will attend to speak and discuss the hottest topics involving fintech, blockchain, and innovations within the startup scene. Money 20/20 connects people globally (Image credit: Kelly Belter) The people behind Money 20/20 noticed that existing events served only specific target groups, for instance banking or startup companies only. For this reason, Money 20/20 wanted to connect people from different communities, such as technology, financial, and payment services. Money 20/20 noticed the urgent need for these groups to communicate with each other to build a better future for fintech solutions. “Money 20/20 is that space — where technology meets money, money meets people, people meet ideas and ideas become reality. This is where the entire payments, fintech and financial services industry connects. Everyone is here, every time.” Pat Patel, the Content Director of Money 20/20 Asia & Europe, started the event by bringing up the importance of the rapidly changing fintech industry: “Bitcoin has become a successful and good business due to blockchain technology. The importance of proof of concept has started to interest different companies as well. Industries want to know what the value is in these and how organizations can be part of the blockchain and its technology. For example, a few years ago I would have not invested in bitcoin because I thought it had problems to take it to the next level. And now, look at where bitcoin is!” Panel Discussion — The current and future commercial value of fintech: The speakers at Money20/20 Seoul from left to right: Jae Yong Lee, Seung Gun Lee, and Joey Kim Jae Yong Lee, CFO & CSO of JB Financial Group “JB Financial Group is the smallest banking group in Korea but we consider ourselves a very aggressive banking group in South Korea. We have two commercial banks. We also have an SME management company and some of our locations are in South East Asian countries such as Cambodia, Vietnam and Myanmar. We like to focus on the players who change the fintech world. We established an open banking platform and offer the platform for e-commerce, third parties, etc., and that’s how we want to share the processes and licenses to fintech.” Seung Gun Lee, CEO & Founder of Viva Republica, Toss “At the moment we are at 11 million won and we have 20,000 users every day. The transaction volume per month is 800 million won. We grow 10–15% every month. The transactions made by our users are around 60 million won even though we have only operated in South Korea. We [have started] those kinds of things like Mint, financial dashboard, credit management systems. We try to achieve much better accessibility to financial products and services, like banks and fintech companies. For that reason we sell savings accounts, financial products and also micro investment features in real estate and all other things that are related to lending, credit data as well as the insurance area.
The growth has been 95% in 5 months.” Joey Kim, CEO & Founder of PeopleFund “PeopleFund was established in 2014 but we started our service last year in 2016. We connect people in need of money with people who want to invest money to make returns. We do SME loans and personal secured loans, real estate as well. We cover all the fields that need financing. We have lent 11 billion won and we have been making money since then. In Korea, we can see many people doing traditional money borrowing.” Patel: What will have the biggest impact on financial services in Korea in 1–2 years? Seung Gun Lee/Viva Republica: The banks are making money from selling their branches and products and they can only get a small profit from the interest. I think P2P lending might be one of those that has the biggest impact in the future. I believe in a good user base. For instance, Alipay made 90 billion USD in only 9 months. When you have a good user base, you can make money. Joey Kim/PeopleFund: I also agree with the idea of building the platform on a user base. We definitely need better financial services fast in South Korea. From my perspective, in the business that I do, we create value from something that we can’t see in the market. There are so many people who are not willing to lend money to others, for example to smaller enterprises, but we do it with a 10% royalty fee on asset-secured loans. We lend money to people who would not get it from the bank at a good rate. We try to capture the value in the market and try to turn it into a product. Customers are willing to invest in the products. They want better service and loans. We can cooperate with the platform. In the future, I think the products will play the main role in the market. For example, JB Financial Group has been an excellent partner in helping us start this company. Jae Yong Lee/JB Financial Group: My approach to this analysis is very traditional. It’s obvious that the revenue usually comes from the fees, but fintech wants to challenge the traditional way and get revenue from interest or other sources. However, if we only focus on these things, the whole system is based on how many customers you are able to get. If you are willing to expand into the fintech area, you will receive a lot more additional income. I strongly believe that fintech has all the possibilities in this. It’s impossible to predict the future and know all your enemies. Patel: What do you think about the regulations in South Korea? Jae Yong Lee/JB Financial Group: The regulations in South Korea are not exceptional. The biggest problems come from the privacy issues that involve personal data. We should agree about the right instructions and tools for protecting privacy matters. For example, China has no regulations for these, and other businesses can be really successful there. The other problem is that the current banks are not happy with dealing with the regulations, because they need to protect the market and their operations. I believe we need better social agreements between the customers and the bank; the government and fintech. We need to think of what is good for customers. We need to develop processes for helping make the ideas more functional. The most important thing is to remember the customers. Data will be open and shared; it’s just a matter of time until this happens. Seung Gun Lee/Viva Republica: They are getting pressure from regulations in China. Regulators are already aware of the fact that fintech is something you need to make better.
I had an opportunity to be a chairman of the Korean Fintech Association, and we discuss significant matters, bold and crazy things. We are aiming to be approved by the congress in June next year, in 2018. At the moment, financial services are dominated by the banks, and they dominate the financial infrastructure as well as the financial services. Some of the ideas we have been talking about are that third parties could get access to financial information from any financial institution. The law would require banks and any other companies to share their data with fintech companies if the third party has the user’s consent. This model would change a lot of things if it came to South Korea: the financial services area and the financial infrastructure would be separated. There would be so many different corporations offering financial services in South Korea. Just like in the US and Europe, there would be huge mega-companies built around financial services. I’m really looking forward to the changes in regulations. Joey Kim/PeopleFund: It took PeopleFund 18 months to go through all the current regulations in South Korea just to be able to borrow money through the banking process. We had to change our business model and the structure of the loans according to the requirements. This tells you a lot about how regulations work in South Korea. We are trying to make the change on a really mainstream level. Panel Discussion - The current and future commercial value of FinTech in South Korea Patel: We asked the audience here the same question, “What will have the biggest impact on financial services in Korea in 1–2 years?”, and the number 1 result was blockchain. Are these results something that you were expecting? Joey Kim/PeopleFund: Blockchain and cryptocurrency are growing in South Korea, and I think they are showing the most growth in the entire world at the moment. The Korean financial industry has always had an issue with personal verification. But now it’s changing a lot, and I believe blockchain might be a game changer here. Many payment systems don’t work well in applications, but blockchain can make the difference in solving the verification issues. For that reason, [I believe] the audience set it as number 1. Seung Gun Lee/Viva Republica: If something is going to have an impact, it comes down to distribution. It should be well distributed. In those terms, I think blockchain, open APIs and artificial intelligence are in a pre-commercial phase of distribution. Blockchain is not yet really well distributed in the market, and for that reason I’m afraid it will not change financial services that much in the next 1–2 years. However, when it is well distributed, it will change everything forever. Jae Yong Lee/JB Financial Group: The new processes and new values play the main role. For me, blockchain stands for freedom from the current operations. We need to know what blockchain and cryptocurrency really do. We just can’t think of the technology by itself. Patel: We have a question for Seung Gun Lee. How did you build trust and credibility with users? Seung Gun Lee/Viva Republica: We tried so hard, but we always failed. Building trust is just not possible at an early stage. Our strategy was to think through convenience and user experience. This way we could emphasize the value; even though it’s not such a sustainable service, when the number of users grows, it shows trust to other people. 
Convenience and willingness to go further play the main roles. You should just focus on the value where you are way better than anyone else. Korbit: Tony Lyu, CEO & Founder of Korbit, was one of the speakers at the Money20/20 event. Korbit is Korea’s first digital asset exchange, founded in 2013. It currently handles 11 digital assets and has over 40,000 active traders and over $10B in cumulative trading volume. Tony Lyu, CEO & Founder at Korbit “Korbit’s mission is to enable the free flow of value by utilizing new technologies such as bitcoin and the blockchain. Starting with financial services, we are creating a world where individuals can transact with each other freely, without sacrificing security or convenience.” Tony Lyu speaking at the Money20/20 event He talked about how global wealth is moving into blockchain-based digital assets. In his speech he referred to surveys and statistics from Autonomous NEXT, which analyzes ICOs and the token market. We have posted some of the pictures from his presentation down below. Tony Lyu started his speech by mentioning that it has been estimated that 10% of global wealth will be in blockchain within 10 years, and it’s clear to see this trend already happening. Entrepreneurs are receiving more money through ICOs than they are receiving from venture capital firms. This is a big trend this year. South Korea has become a major market in cryptocurrency trading, holding the biggest fiat market share for cryptocurrency exchanges with fees, at 37%, leaving China behind at 31.4%. South Korea is a leading country in using blockchain. Until 2017, all the token launches taken together were less than 1% of global crowdfunding activity, suggesting meaningful opportunity (Source: Autonomous NEXT). The difference between investing in stocks and tokens is that if you invest in stocks, you invest in organizations and companies. When investing in tokens, you don’t invest in a company. The company is usually a non-profit organization and, for example, if there’s an application you actually get the currency of the application or some other type of utility of the application. You are not buying the application, you are buying the currency used by the application. South Korea has shown the biggest growth in this market in the past year. The difference between investing in stocks vs Tokens (Source: Autonomous NEXT). “Usually South Korea doesn’t lead the world in financial services, but in the blockchain space, South Korea is the global leader. As a country we can think of ways to become competitive in the future.” ***At the time of this article, ICOs had not yet been banned in South Korea. With recent news coming from the financial and regulatory commissions, it’s hard to say right now how much this has affected the cryptocurrency markets in Korea***
https://medium.com/bitcoin-center-korea/money-20-20-korea-predicting-the-future-of-fintech-ed859d525659
['Tuulia Salo']
2017-10-12 06:26:16.543000+00:00
['Fintech', 'Blockchain', 'Finance', 'Technology', 'Banking']
490
From Vision to Version (Part 2)
In the first post from this series I defined terms such as “vision,” “strategy,” and “tactics,” then explored the benefits of having a well-defined and communicated product vision. I also provided some tips for how one might go about gathering inspiration and then articulating a vision of their own. In this post, I will illustrate how to use some of these tips by demonstrating how they played into the creation of the current product vision at Getsafe. Initial Explorations and Thoughts I started working for Getsafe in October 2018 as a newcomer to the insurance industry. Needless to say, I had a lot to learn about insurance as a domain as well as about Getsafe in general. Thus, I spent the first month or two on the job trying to gain as much context as possible in order to formulate some opinions of my own. Here’s a summary of my learnings from these explorations. The customer lifecycle is super, super long. The timing of insurance purchases generally correlates with the occurrence of major life events, which means that on average people will only need to buy a new insurance product every few years. This presents some pretty interesting challenges for customer engagement, as the long timeline between purchases means that we will need to be very creative about how to stay relevant and top-of-mind. We also need to make sure that our products and services can evolve with the lives of our customers. Insurance was meant to be personalized. A very interesting aspect of today’s insurance is that it is possible to lose money by selling more product. This is because today, insurance as a business relies on making sure that the amount of money collected from customers exceeds the amount of money paid out in claims, in aggregate, over time. The word “aggregate” is key here because at the moment the industry does not yet have the means to make sure this equation always holds at an individual level, meaning that companies simply make money on the “low risk” customers and lose money on the “high risk” customers. Insurance can be a part of every lifestyle. Many companies supplement revenues from their core business with commissions from selling insurance. For example, retail shops often sell insurance for the goods that people buy at the store. Banks often cross-sell homeowners insurance policies when customers are applying for a home loan. From the perspective of an insurance company, this means that there is likely an opportunity to vertically integrate and position insurance products as part of a lifestyle instead of purely standalone. How people buy insurance can become more natural. At Getsafe, every new employee spends a part of their first week mapping out the customer acquisition journey from initial discovery to completing their first purchase. When I went through this exercise, the customer acquisition journey looked something like this: Customer realizes they need insurance. Customer explores options via various tools. Customer gets quotes from some of these options. Customer selects one option. Customer purchases insurance. What stood out to me here was that the first step of the journey required customers to somehow realize they need insurance. This feels unnatural because insurance does not occur to me as something that people generally wake up each morning and just decide they need. Insurance does not directly address any fundamental human needs in the way that food fulfills hunger or friends create a feeling of belonging. 
To me, it feels like the customer acquisition journey ought to have a “step #0” that starts somewhere before the need for insurance is fully realized by the average consumer. Insurance has a noble origin. As I learned more about Getsafe and the insurance industry, I started asking myself a very fundamental question: Why does insurance deserve to exist? So I started researching the origins of insurance. To my pleasant surprise, insurance has a relatively noble beginning, serving as the instrument by which any given community can empower its members to recover from disasters. Unfortunately, this narrative has gotten lost because today we generally view insurance companies as sleazy, sales-driven businesses that profit from the fear within individuals. The sense of communal benefit and protection is nowhere to be found in the average person’s perception of why insurance exists. This represents a very large gap between that initial starting place and where things are today, and I think our mission to reinvent insurance should also include helping people understand how it fits into their lives and why it is good for them and their community. Turning Inspiration Into Concrete Statements Adding up these learnings, here are three statements that start to concretely articulate how the inspiration from above could inform our product vision. Imagine a world where… …Getsafe provides products and services that directly address human needs. There should be a reason for people to wake up in the morning and want to use one of our products or services. …Getsafe engages with people before they realize they need insurance. We want to be a part of the journey to help them understand how insurance may fit into their daily lives. …Insurance feels more like a companion than a pile of paperwork. Getsafe should bring insurance back to its roots and re-create a sense of community around it. The Product Vision at Getsafe With these concrete statements, we can start to tell a story about the world that we would like to create. Here is a high-level pitch for what we are trying to achieve at Getsafe. Bridging Insurance With Human Needs “Peace of mind” is a basic human need, and here are some ways that the average person might articulate this fundamental desire: I need to… …plan for the future. …have a backup plan. …stop worrying. …feel safe. …be ready for the “what-ifs.” …know my family will be OK. As an insurance company, providing the appropriate coverage to our customers is one way that we can try to address “peace of mind” for them. Unfortunately, insurance is really complicated, and most customers need help understanding what they need, when, and why. Traditionally, insurance agents have tried to bridge this gap by setting up long appointments to interview the customer about their needs. For us as an insurtech, how can we use technology to do this better? How can we seamlessly bridge “peace of mind” with insurance products such that it feels completely natural to our customers? The Insurance of Tomorrow Technology has become ubiquitously embedded in the daily lives of people. In today’s on-demand economy, consumers gravitate toward real-time access and instant gratification. 
This trend provides the optimal environment for next-generation insurance products to incubate because it affords us ample opportunity to inject ourselves into the everyday lives of people. With a mobile-first approach, our app lives inside the pockets of our customers and travels with them wherever they go. As long as we are providing tangible benefits to our customers, we have the opportunity to position insurance as a life companion, rather than a necessary evil. We foresee the evolution of the “insurance experience” in two phases: 1. Insurance as an App Over the last two decades, technology has dramatically changed how people interact with many products and services. This same movement toward digital and on-demand is now finally gaining traction within the insurance industry. For Getsafe and our insurtech peers, this means that we have the opportunity to define what the “insurance experience” ought to feel like in this new world. As an example, customers can now purchase and cancel insurance policies in real time, without scheduling an appointment or filling out a long contract. We will build technology to transform interactions that have traditionally been complex (e.g., claims) into ones that are frictionless, fast, and fair. 2. Insurance as a Lifestyle Since insurance is complicated and usually irrelevant to daily life, we believe that insurtechs will aim to achieve far more than the digitization of insurance products. We believe that in order for the industry to truly progress, insurance products must become more ubiquitous in the everyday lives of consumers. It should be clear to our customers how we enable them to live the lives they’ve always wanted to live. They should not perceive insurance products as something that they need to buy but hope never to use. Three Customer Groups Given the above premise, we’ve articulated three distinct customer groups to keep in mind as we develop products and services: Customers of insurance products: i.e., “I need bike theft insurance.” Getsafe employees: i.e., “I work for Getsafe.” Average consumers: i.e., “I own a bike.” Each of these customer groups has drastically different goals, so naturally we now also have three major product areas that we work on in parallel: Insurance products: Create the next-generation insurance experience. Internal tools: Maximize operational efficiency. Lifestyle scenarios: Bridge the gap between human needs and insurance. Here is an illustration that I often draw on the whiteboard when pitching our vision to various internal and external stakeholders. It is simple, yet it accurately describes how I see the relationship between each of the three product areas. Getsafe will reinvent insurance by creating a new insurance experience that caters to the digital, on-demand needs of customers. We will scale our operations by developing internal tools. Ultimately, we will also create products and services that bridge human needs to insurance products. Conclusion If you’ve gotten this far, thank you for reading! I sincerely hope you’ve found both of these articles useful and that you’ve been able to find some tips to apply to your daily work. Feel free to drop any questions or comments below, and please follow us if you’d like to keep tabs on what we’re up to. Don’t be a stranger, Patrick
https://pattsao.com/from-vision-to-version-part-2-eee4f4b292e6
['Patrick Tsao']
2019-05-06 19:23:17.393000+00:00
['Technology', 'Product Management', 'Careers', 'Vision', 'Insurtech']
491
“It’s a good school, of course, but…”
“They’re good schools, I know, it’s just…” and the speaker trailed off. And I’ve heard that so often. The problem is that when we say, “it’s a good school,” 95% of the time we mean — allow me to be blunt — ”it’s a school filled with white, upper-middle-class or better kids.” That’s good, right? It means the kids come to school trained to be ready for mass instruction. It means homework gets done with parent help (or more). It means the pain and “distractions” of poverty don’t interfere with the school day. It means there’s plenty of money for extras like field trips. Most of all it means the kids will do fine on standardized tests, and will get into universities that will increase the status of their parents. But… this time when I heard this I thought of Stephanie Passman — one of the brilliant teachers I am lucky enough to work with. Stephanie and I were up in Washington a couple of weeks ago, working with deans and professors of education from universities across America on the question, “How do we prepare pre-service teachers for the schools we need?” ^ You can find our presentation here It was a great day of conversations, but at one point one professor said, “but I have colleagues who are doing things the old way, and they are great teachers, they are doing a great job.” And Stephanie looked at him and said, “If you think they’re doing a great job you are measuring the wrong things.” And that becomes my response now. “What are you measuring?” After all, what Stephanie meant was an absolute: while an ‘old school’ professor might be ok in British Lit and serviceable in History, he or she is an embarrassing failure in a school/college of education where the modeling must be about the future. And if your school is “good” because it doesn’t undo the born-in advantages of its students, it is not “good” at all, but simply a fairly efficient day care operation. Allow me to step back for a moment — I said above that we meant rich white schools 95% of the time, but let’s look at the other 5%. In those cases we mean ‘compliance academies’ — African-American and Latino kids marching in straight lines wearing uniforms and being routinely humiliated for any violation of whiteness expectations. Whether KIPP or Success Academies or the all-minority school in your neighborhood, these are the contemporary equivalent of British colonial schools or American Indian Schools. They are “good” because they are more under control than the terrible public schools most big cities offer their poor, and because whites imagine that Black boys taught to march in step will be less of a threat on the street. And if that is “good” we are very much the miserable racists we seem to be. So, if our definition of “good schools” is painfully illusory, what might we measure to find “good”? A Few Metrics Is choice expected? More than a few times visitors to our Albemarle County high schools ask, “so kids are allowed to eat anywhere?” To which I tend to always reply, “of course.” When that conversation extends, the objections people bring up tend to sound — to me — as if they think their school is filled with especially sloppy animals. Which is weird, except that kids will always drop to the level of your expectations. 
I toured a new high school in Washington, DC once where the teachers had pulled all the new comfortable furniture from student spaces and hidden it in faculty offices — ”the kids,” an Assistant Principal assured us, “don’t know how to use this furniture.” “They don’t know how to use chairs and couches?” I asked. He ignored my question. To me choice equals trust. And I have never seen a school where kids were really learning anything that didn’t involve a hell of a lot of mutual trust between kids and adults. I ‘measure’ a few things. How many kids are in the hallways during class time? is one. Kids in the hallways means that kids are trusted — are trusted to be on their own, are trusted to go where they need to go — whether that’s the library, our mechatronics labs, or wherever. Are kids in classrooms sitting or standing in lots of different ways? Really, very few kids are comfortable in classroom chairs, and when kids are uncomfortable they’re focused only on discomfort. We made a rule a few years ago that we’d never buy fewer than three kinds of seating and worksurfaces for any learning space, but even where we’ve managed to refurnish, kids need to learn to create their best environment. If you don’t let kids choose how, where, or if to sit, you are failing to help them prepare for life, and it is not “good.” Choice in technologies? Are kids using phones? Do they get to really control the one-to-one devices you give them? (Download, add software, change the interface) Are kids in a class using different software or web tools/sites to work? “These are personal learning devices,” my boss Vince Scheivert is fond of saying, “if they can’t personalize them, they aren’t that.” It is essential — in this century — that kids learn to use the tools of their lifespans, and your kids are going to live their lives in the mid to late 21st Century. Whine all you want about the good ol’ days on your own time, but if you are working in education you must be modeling, you must be helping children learn to live well by making good technological choices. They cannot do that, your school is not “good,” if you ban, lock down, and tightly direct technology use. A library with a lot of noise, collaboration, tools. About seven years ago I heard it said that, “in this century a library had to become the community kitchen, and stop being a supermarket.” Around that time I stumbled into that Fifth Avenue, New York Apple Store at 2.00 am and came to the conclusion that this century’s libraries needed to be community centers for contagious creativity. The point being that the world needed very few libraries with massive collections; after all, Google was already scanning the entire collections of the University of Michigan, Harvard, and the New York Public Library, creating an incredible database of the world’s words. For ten years already, schools I worked in had been using Fordham University’s Ancient History Sourcebook to connect kids to history. We needed our libraries to have tools kids wouldn’t find at home or in classrooms. We needed space for kids to gather, to process information together, to make things, to create. Our libraries have tools, 3D printers, music construction studios, zones for writing, microcomputers; one has a laser cutter. They are the active academic core of our schools, crowded and noisy and full of invention — though yes, we create quieter zones as well, and rooms for teams to work in. If your library is not a place that works for today, your school has a problem. The Corridor Climate supports all kids. 
I often tell the story of working in two neighboring high schools. One was small (about 300 kids), and was always near the top of the state in test results. The other was large (2,700), far more diverse, with many more “challenges.” The small school was a brutal place, with constant bullying by kids and teachers. The big school felt very safe for most everyone. The big one created safety in many ways, but it began in the corridors, which were carpeted for sound control, had very wide stairways and doorways to eliminate passing-time crush points, and had teachers standing outside every classroom doorway during passing times, constantly interacting with kids. The small school had none of that. The big school also gave kids 10 minutes between classes, a long enough time that the typical pressure we see disappeared. Time, sound control, relationships in action. Those are things you can do. Here’s a test: if SpEd kids sometimes need to leave classes early in order to change classes safely, your school is, by definition, unsafe. And kids made unsafe in your corridors will do badly. In life if not academically. Remember, ending bullying is all about adult behaviors. Kids imitate adults. Kids are taking risks, teachers are taking risks. Risk is how we grow. Risk is the essential modus operandi of both childhood and adolescence. If I walk through a school and I don’t see kids taking risks, in class, in the library, on the playground, in the halls, I know we have a school that is fighting that essential mode. And that can’t be a good school. But kids won’t take risks unless teachers and administrators are daily risk-takers. The teacher who repeats last year’s lesson almost exactly is a problem that needs to be addressed. The teacher whose classroom always looks the same is a problem. And the administrators who don’t encourage and demonstrate risk-taking are a problem. When schools squelch risk-taking they stop being educational institutions and become, simply, institutions. Now, go back over the past year. How does your school do when you change your measuring sticks? “Good” cannot be about either socioeconomics or compliance. “Good” has to be about kids growing, about every kid being ok, about every kid learning the tools and environments they will live in. In the end, you know. A really good school doesn’t end with a “but…” or a “just…” or a wish that kids would all come even if they didn’t have to.
https://medium.com/synapse/its-a-good-school-of-course-but-21a19c7b95aa
['Ira David Socol']
2016-07-06 15:37:43.900000+00:00
['Teaching', 'Iste2016', 'Learning Spaces', 'Education', 'Technology']
492
How Amazon Plans to Take Down SpaceX
Space Is the Only Way to Go On several occasions Bezos has expressed the view that mankind must inevitably move into the heavens. His vision — outlined in a speech in 2019 — paints a future of mass industrialisation of space. Instead of polluting and destroying our fragile world, he wants factories placed in giant orbiting hubs. Bezos imagines humanity will leave the Earth as well. But unlike others, who picture colonies on Mars or on the Moon, he believes we will construct gigantic habitats in space. These structures, first imagined by physicist Gerard O’Neill, could be perfectly adapted to human life. Unlike planets, which by nature are limited in size, an almost endless supply of habitats could be built. In this vision of the future, the pressure we currently place on the Earth would slowly be lifted. As manufacturing moved off-world, pollution would fall. As people migrated to new habitats in the heavens, wildlife could reclaim our planet. Eventually the Earth would become a massive park, and humanity would become truly space-borne. What Bezos imagines is vast in scale, a revolution comparable to the dawn of agriculture thousands of years ago. Even with huge wealth and the vast resources of Amazon, Bezos will not be able to do it alone. His aim, at least for now, is to take the first few steps down the road towards that future. The launch of New Shepard in 2015 was the first, small step along that road. Now he has bigger steps planned. Take Blue Origin first. Since 2015 the company has continued work on New Shepard. They have now made a dozen sub-orbital flights, and hope to soon demonstrate that the capsule can carry paying passengers to the edge of space. The real prize, of course, is building an orbital rocket — and with New Glenn, a new design, Blue Origin believe they have that. Building an orbital rocket is hard. Only one private company — SpaceX — has ever managed to do so. Despite at least eight years of development work, Blue Origin still have not flown New Glenn, though they claim to be close to a launch, perhaps as soon as 2021. Like SpaceX, Blue Origin hope to one day carry astronauts onboard, taking them to orbit, and even beyond. To that end Blue Origin have announced another secretive project: New Armstrong. Though little is known about the project, it would appear to be a lunar rocket of some kind. That guess is backed up by Bezos’ publicly stated ambitions to reach the Moon. In 2019 he unveiled a planned lunar lander named Blue Moon, now under consideration by NASA for use in any American return to the Moon. Blue Moon: a vision of how Jeff Bezos may one day send deliveries, and astronauts, to the Moon. Image by Blue Origin The Second Path to Space: Amazon Blue Origin is a separate company to Amazon. It sells no products, and makes no profit. The company is funded privately by Jeff Bezos, reportedly costing him more than a billion dollars every year. And though it has had some success in reaching space, it has not achieved as much as Elon Musk has with SpaceX. Through Amazon, though, Bezos has a second lever to push his goals in space. And though the company has so far revealed little about its intentions, it has made several interesting announcements. Most notably, the company has announced plans, named Project Kuiper, to launch a constellation of satellites. Officially the aim is to expand Internet access, especially among the poor. But this is probably not the only reason Amazon is suddenly interested in the heavens. 
The business case for large satellite constellations remains unproven. Previous attempts at building such constellations — by Iridium, Teledesic and Globalstar — failed miserably when faced with the high costs of putting thousands of satellites into orbit. Several recent attempts have stumbled at the same hurdle. OneWeb recently suffered bankruptcy, and others like LeoSat have long since faded away. Jeff Bezos does not seem to be afraid of the huge capital investment, or even of running a satellite internet service at a loss. If instead the constellation serves as a means to drive data and customers towards Amazon’s computing platform, it may be worth the costs. If it can serve his other ambitions in space, and perhaps provide Blue Origin with a dedicated customer, even better. Both Amazon and Microsoft are already building ground stations near their data centres. They aim to transfer data from orbit to their computing clouds as fast as possible. It is clear both companies think large amounts of data will soon be flowing through satellite networks, and they each want to capture as much of this market as they can. Much like computer infrastructure, satellites don’t directly earn revenue for their owners. Their value lies in the data they collect or transfer, and especially in processing that data. One clue that Amazon is thinking in this direction comes from Earth, a set of tools provided through Amazon Web Services that handles the processing of Earth observation data. Satellites can capture vast amounts of data. The hard part is getting it back to Earth. Photo by NASA on Unsplash To Know the Future, Watch the Data The idea is simple. Amazon, or its rivals, can provide powerful data processing tools through their cloud platforms. Satellite networks, together with strategically located ground stations, help them rapidly collect and transfer data into the cloud. Stop thinking about satellite broadband as a way of connecting the world, then, and start thinking of it as a massive new way of gathering data. More speculatively, Amazon could even treat its satellites as a service. Instead of needing to launch a constellation of satellites to collect datasets, could you just rent sensor time on an Amazon satellite? If Amazon is regularly launching satellites, could you just pay them a nominal fee to carry your device to orbit? If this succeeds, and the cost of getting an instrument into orbit falls to almost nothing, then we may see huge volumes of data flowing back to Earth. Everything from weather monitoring to movements of ships, aircraft and cargo containers could potentially be watched, recorded and processed through Amazon offerings. This is, so far, just speculation. Amazon has revealed very little publicly about its plans for satellite constellations, though senior figures at Amazon hint they are thinking in this direction. Regardless, as more and more satellites go up, and constellations grow ever larger, it’s hard not to imagine something similar happening. The elephant in the room is, of course, SpaceX. By all appearances they are far ahead of Amazon and Jeff Bezos. SpaceX have already launched astronauts to the International Space Station, and placed hundreds of satellites into orbit. Amazon, by contrast, seem to have hardly moved off the drawing board. Can they really compete? Amazon has two big advantages over SpaceX. They have more money, with annual revenues that dwarf anything SpaceX can claim. They also have more existing infrastructure. 
If Amazon can make space all about data, their existing platforms will make them the clear winner, even if they are slow to get started. Jeff Bezos, though, is undoubtedly thinking bigger. New Shepard, Blue Origin, Blue Moon, Project Kuiper. All are just the first steps along a path towards revolutionising our way of life. That revolution may not happen for decades, or even centuries. But Bezos is determined to give humanity the push it needs to get moving.
https://medium.com/discourse/the-new-space-race-is-all-about-data-5e1b757e04a7
['Alastair Isaacs']
2020-12-18 03:08:45.096000+00:00
['Space', 'Technology', 'Amazon', 'Data Science', 'Future']
493
Where we are Now
I’m not happy with the way that the internet has made me, or should I really say that I have just noticed how the internet has made me. Photo by Anshu A on Unsplash I was just browsing a health site, because I’m interested in that, and I was dealing with a WhatsApp message at the same time, so I was back and forth for a few minutes, and when I returned to the website I had forgotten what I was looking at, and I was drawn to a search bar that asked what I was looking for. You know, where you enter a subject and press the search icon? How easy has it become to click into that empty space and wait for the drop-down menu to appear? How easy have things become where we don’t have to think about what we’re doing, we just click? The internet is clever! And we’d be lost without it too! But how many of you have actually realised that it’s changing the way that we think? I was actually looking for the drop-down as if it was second nature, and when there wasn’t a drop-down I got stuck. Hence being here to tell you lot. What’s made me feel so deliciously happy, though, is that it was second nature for me to come here in the first place and to start writing. We don’t need drop-downs! It’s hard to remember these days when we used to have to send everything via the post. Who remembers pen pals, where some of us would write to someone in another town/city/county? The very start of wider communication, and look at us now. We’re worldwide, and I just love it! Hello to you wherever you are, and I get so much pleasure from being able to say that, knowing that in an instant after I’ve published you’ll be able to read me. If the world is not as big a place as we once thought, it’s definitely a lot smaller now that we can touch each other so readily with words. Maybe it’s just me? My computer is my life. I look, every day, through this screen into a world where everything is at my fingertips. I have a mind that quickly flits from one thing to another, and my PC screen with all of its easily obtainable windows can keep up. I’m in heaven. If only I had someone to talk to. I really don’t think I’d know what to say. I’ve started listening to two women talking on a podcast called REDHANDED, where they talk about true crime. I accidentally came across it while working away on a website that I’m building and saved it to listen to a bit later on that evening. What’s appealing to me is hearing their lovely London-accented voices and their beautiful, sweet, intelligent minds. If it wasn’t for that I don’t think I’d bother listening, as what they talk about are the most horrific true crimes that I’ve ever heard. SHOCKING! But put into the context of what they are doing and where they’re coming from, it makes really good listening. They have a show called ‘Under The Duvet’ too. I’ve not listened yet, but I think a subscription is on the cards, for me at least. My every day is full of me. I rarely see or hear anyone apart from my own thoughts, and occasionally I talk to myself. I’m not complaining! I like my own company, and this internet thing on my computer keeps me more than occupied. Ah! I kind of dread the day when I’m not here as much and I have to go to work. I’ve started to write on paper, with a real pen. I actually surprised myself earlier on. I lay on my bed listening to an audiobook called Zero Negativity and, as I always do when I listen to good audiobooks, I made notes. 
What I’d usually do is make a bookmark in the app, but while listening to it through my television I couldn’t add a clip, so I rushed to the living room to grab a pen and some paper. Hey, it’s a free world. And sometimes it’s refreshing to ‘not rely on technology’. We can’t manage without it though, can we? Not now we can’t; it’s too ingrained in us and society, and we need it, it’s a way of life. I feel sad for all of those people who have an idea of it and can’t behold it as we do. Maybe feeling sad isn’t the right word. I wonder what it must be like to hear about it and see it and want it but not be able to get it, is what I mean. Is that sad? Where we are now is at a tipping point, in my view. What with the pandemic and the climate problems that we have, the world has never been so closely knit together, because we all need to make things right and stop the changes that are happening. Technology just brings us closer together, and more and more people are finding it. Just look at Medium now. There have never been so many people writing about so much in one place, and when I think of all the other places there are to write and share on the internet, with all of the drop-down menus and technology-led ways of doing things, it makes me wonder who else has been trapped by that dupe, and whether one day this all will be full up too, and where then do we go?
https://medium.com/an-idea/where-we-are-now-f4b0cadfa822
['Robert Walker Aka Num']
2020-12-13 03:02:47.594000+00:00
['Internet', 'Writing', 'Life', 'Learning', 'Technology']
494
The Unbank of U
Bill Gates, who had the most money in the world, once said: “Banking is essential, banks are not.” With your cash no longer piled up in a bank vault, why would you need a bank to let you access, spend and manage your money? After all, when did you last visit a bank branch? Money is digital now. It’s all spreadsheets and data, digital reconciliations and transfers. And in this digital world, banks are analogue beasts struggling to adapt from paper to pixels. So Bill’s idea was a great idea even back in 1994. But no one listened. Until we did. We put you at the heart of a solution by creating U. It’s a personal account for the digital world. It runs from your phone, and provides the money and contactless debit card services you would expect. In this respect it is a better, digital version of what you have always had from a bank. Except it’s not a bank. Why “Unbank” with us? Here’s why. Banks make money by lending out your money to others at higher interest rates than they pay you. Just check today’s savings interest rates. You’ll be lucky to make 2%! However, your overdraft interest will be 18% or more. Increasingly since the crash of 2008, though, banks have resorted to making money from selling other services to their customers (such as PPI, mortgages, credit cards and so on). It created some really bad behaviour and a loss of trust. Fines of billions and billions of pounds have been paid. Many think there’s more to come. The banks’ other ‘big idea’ was penalty fees on current accounts; hidden charges and complex pricing models that are tough for normal people to understand. They invented the ‘Free-if-in-Credit’ accounts which most people use today. These accounts hope you go into debt, either planned or (worse still) by mistake, so they can actually charge for something they said was free. A £5 charge here. A £25 charge there. These mount up. Stealthily. And the banks love them. Each year 16m British people have unexpected overdrafts, which cost them fees and penalties ranging from £5–100 per month. 24m people have overdrafts, which they often exceed by mistake and so pay extra fees too. According to the Treasury, the banks make £1.25bn each year from your mistakes. These charges don’t affect the wealthier people that banks love, but for most normal people, for whom money is tight, this is unfair. In fact, it’s a scandal. A huge business model, created by banks, to prey on less wealthy customers’ mistakes. Wow. No wonder trust in banks is so low. Introducing you to U At U, we have a different approach. Because we don’t lend out your money, our business is simply about providing an account with clearly and fairly priced services, where you pay for what you use, not for what others use. The U Account lets you budget, spend and pay your bills more easily, so that hopefully you can put money aside for a rainy day and avoid unnecessary borrowing. It also means no penalty fees and no unexpected charges. (Many of us need to borrow too; we aren’t anti-lending, just opposed to how the banks subsidise free service for those who don’t need to borrow with penalties and fees levied on those who have to.) So that’s the origin of what we call our “Unbanking” philosophy. It means we have a positive reason to provide a positive service that keeps your money in the positive. It’s called the U Account because it lets YOU personalise it closely to your own specific requirements: to have a view of your money, to ring-fence money for bills and pay them from Extra Accounts, and so on. 
It is modern, helpful to your situation, and pretty cool too. All of your funds are held with a major UK bank in a single business account, under the regulatory control of another bank, not by us. So your money is as safe as can be. Let’s be fair. Love them or hate them, the banks have scale and strong balance sheets, and business needs them. It’s just that they suck at dealing with people. So you may well be better off Unbanking and opening a U Account. It will let you personalise your budgeting, avoid borrowing, and start to pay off credit cards and overdrafts once and for all. And it will help you spend and manage your money to suit your needs, not theirs. So it’s not “The Bank of XYZ”, but “The Unbank of U”. After all, it’s for YOU, not them.
https://medium.com/u-account/the-unbank-of-u-5feb6e5ba95c
['U Account']
2018-03-27 13:51:48.625000+00:00
['Banking', 'Budgeting', 'Technology', 'Finance', 'Fintech']
495
Dominate Your Niche, Podcast Episode with @iamDrWill
I had another chance to be on the Dr. Will Show Podcast! Will Deyamport, III, Ed.D & I dig into my journey to becoming a teacher → edtech integrationist, coach, speaker, author… the power of developing relationships & modeling… helping teachers in the current climate… & more! Here are some timestamps for highlights from the episode:
https://medium.com/@staceyroshan/dominate-your-niche-podcast-episode-with-iamdrwill-567b1e2e98ba
['Stacey Roshan']
2020-12-22 22:31:58.669000+00:00
['Technology', 'Edtech', 'Entrepreneurship', 'Education']
496
JavaScript Best Practices - Node.js Apps
Photo by David Clode on Unsplash JavaScript is a very forgiving language. It’s easy to write code that runs but has mistakes in it. In this article, we’ll look at some mistakes that are easy to make in Node apps, including joining paths the wrong way, referencing process.env and more. No String Concatenation with __dirname and __filename In Node.js, __dirname and __filename are strings that hold the current directory path and the current script’s file name, respectively. Many people try to concatenate them like the following to get the full path of a file: const pathname = __dirname + '/bar.js' However, the code above is probably a mistake, since Node apps can be run on different platforms, which have different styles for paths. For instance, Windows uses the backslash as its path segment separator, while Linux and macOS use the forward slash. Therefore, concatenation isn’t a reliable way to join paths together. It’s very easy to create an invalid path by concatenating segments. Instead, we should use the path.join method to combine segments as follows: const path = require('path'); const pathname = path.join(__dirname, 'bar.js'); Then we won’t have to worry about the path structure causing issues when running on different operating systems. The path.resolve method returns the fully qualified path with the correct path separators. For instance, we can write: const path = require('path'); const pathname = path.resolve(__dirname, 'bar.js'); and we get the full path generated by Node.js. No Reference to process.env The process.env object is used to store deployment or configuration settings. Putting it everywhere is a maintenance issue because we’re referencing this global dependency everywhere. It can, therefore, lead to merge conflicts and deployment issues in multiserver setups. The best practice for storing configuration settings is to store them all in one file and then reference that file instead. For instance, instead of writing: const dbPath = process.env.DB_PATH; We should write: const config = require("./config"); const dbPath = config.DB_PATH; No process.exit() process.exit() is used to immediately stop a Node process and exit the program. This isn’t good because there may be lots of things running in the background that are needed by the program. It can stop a program completely when an error occurs. For instance, if we have something like the following: if (error) { process.exit(1); } Then the program exits immediately and there’s no chance for our program to respond to the error if error is truthy. The better way to raise errors when they occur is to throw an exception. For instance, we can do that as follows: if (error) { throw new Error("error"); } This way, our program can catch the error and then respond to it. We’ll also get the full stack trace from the Error object, showing what was called before the error was thrown. Photo by Emily Campbell on Unsplash Don’t Use Synchronous Methods in Big Apps Since Node is single-threaded, running synchronous methods means that they have to finish before anything else can run. If one is a long-running operation, then that is definitely a problem. We don’t want one method to hold up our entire app. For instance, if we’re running a web server app, then we definitely shouldn’t have synchronous methods holding up the web server for one request. However, for scripts that run line by line and don’t have multiple parts, synchronous methods are useful for simplifying the code and doing what we want sequentially without much thinking. 
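For a small one-off script, for example, blocking calls are perfectly reasonable, since nothing else is waiting to run. Here is a minimal sketch of such a script (the file names and logic are hypothetical, just for illustration):

const fs = require('fs');

// A short, sequential script where blocking is fine:
// nothing else is scheduled, so readFileSync holds up nothing.
const raw = fs.readFileSync('data.csv', 'utf8');
const rows = raw.split('\n').filter(Boolean);
fs.writeFileSync('report.txt', `Processed ${rows.length} rows\n`);
console.log('done');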
In a big production app, by contrast, we shouldn’t be running methods like fs.readFileSync as follows: const fs = require('fs'); const file = fs.readFileSync('foo.txt').toString(); We can instead use methods like readFile from fs.promises to read a file: const fs = require('fs').promises; (async () => { const file = await fs.readFile('foo.txt') const content = file.toString(); console.log(content); })() This way, we run asynchronous code in sequence, as we do with synchronous code, without holding up the rest of our program. Conclusion If we want to create strings for file paths, we should use the path.join or path.resolve methods to create full paths. This way, we won’t have to worry about path structure differences between operating systems. The best way to store config settings is to put them all in a configuration file and then reference everything from that file. process.exit shouldn’t be used to exit the program when we encounter errors, because it ends the program abruptly, giving our program no chance of responding. Finally, other than in small scripts, we shouldn’t be using synchronous methods in our Node apps.
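To make the configuration-file advice above concrete, here is one minimal shape such a module could take. This is only a sketch; the setting names and default values are assumptions for illustration, not something the article prescribes:

// config.js: the single place in the app that reads process.env
module.exports = {
  DB_PATH: process.env.DB_PATH || './data/app.db',
  PORT: Number(process.env.PORT) || 3000,
};

// Elsewhere in the app, reference the config module instead of process.env:
// const config = require('./config');
// const dbPath = config.DB_PATH;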
https://medium.com/swlh/javascript-best-practices-node-js-apps-2f4dfc788fca
['John Au-Yeung']
2020-05-31 05:16:24.003000+00:00
['Technology', 'JavaScript', 'Software Development', 'Programming', 'Web Development']
497
[Geeky] RubyConf Summary (updated 2)
RubyConf 2010 was excellent. It was my first and probably not my last. Some general thoughts and then a master list of links (the real meat). Unlike most ‘business’-type conferences and talks I have gone to, the RubyConf style of presentation (and perhaps the Rails or Ruby community — I don’t know where it stops) is beautifully minimalist. Very few words on slides, large and impactful photographs. Is it because no one uses PowerPoint and few even use Keynote? I don’t know. But there were no text-heavy, bullet-heavy word slides. And also, lots and lots of code (that part is definitely a geek thing). Here is a loosely organized set of links to the things that hit me most at RubyConf this year. It’s random and idiosyncratic (same thing?) but it reflects my real-time notes and feelings. These were all new to me but they might be old news to you, of course. These are my top links: nicksieger’s warbler at master — GitHub — This was one of my top 5 talks. Warbler is a tool to package a Ruby and/or Rails application into a single .jar or .war file to be run on any other computer that has a Java VM installed. If this works as advertised it is a very important bit of technology. I am going to try it out. MacRuby: The Definitive Guide — Another one of my favorite talks, about MacRuby, a Mac OS X implementation of Ruby. From what I saw it looks very real, with good integration with all the Mac OS X libraries, and it runs on top of the Objective-C runtime. It wasn’t totally clear what Apple’s posture is relative to MacRuby, but I hope it is positive. mattmatt’s showoff at master — GitHub — Showoff is a very neat presentation tool. It’s not an app like Keynote; it’s actually a gem which processes a minimalist markup and serves it up in a local Sinatra server. It also easily lets you deploy your presentation to Heroku. Hmm. Is it mattmatt’s or schacon’s showoff at master — GitHub??? Git Wrangling — This was an intense and complex talk about Git. There is, as we know, so much more in Git than most people use, and this talk touched on it. Also, Scott Chacon had a funny (but I think he was serious about it) interlude about how to be a gentleman. It includes the recommendation that a gentleman will always rise when a lady enters the room. Also check out these links with further details about the slides (really, they are good!). Ruby Mendicant University — A really inspiring talk by Greg Brown about his vision and mission to teach people about Ruby and programming. But this is not a fly-by-night little course; Greg has a big vision and is pursuing it in a comprehensive and highly innovative manner. I was very impressed and will follow and support Greg’s work. A great guy! And here are many more really good ones:
https://medium.com/pito-s-blog/geeky-rubyconf-summary-updated-2-d9e90f6324d1
['Pito Salas']
2017-06-08 19:23:10.385000+00:00
['Code', 'Rails', 'Ruby', 'Programming', 'Technology']
498
6 Tips for Developers to Handle Imposter Syndrome
The things that worked well for me “Every one of my successes is no big deal and due to luck.” “I feel like a fake because I still don’t know [xxx].” “Every failure is due to my lack of expertise and I should give up.” “I’m lacking experience in that topic, I’d better keep my mouth shut.” Hi! You are not alone… I went through that too, and a lot of developers suffer from imposter syndrome as well! I will be honest with you. It took me more than a year to embrace the job title of developer. During my first year of employment as a developer, I’d never felt more like a fraud. Even though I had my share of knowledge, every developer around me seemed way more talented than me. Which is a problem when you constantly compare yourself to others: everyone seems better than you. Because I respected my coworkers, my feeling was that I did not deserve that title yet. How many days did I go home feeling like a fraud? A lot. Was it justified, even once? Nope. Remembering this today really seems absurd. What was I thinking? Just because I don’t have the same expertise as they do, does that make me a fraud? Photo by Brooke Cagle on Unsplash Today, I feel way better about my knowledge. I’m fine with my current expertise and my learning curve, and I don’t punish myself when I don’t know something. Here are some tips that helped me overcome imposter syndrome, and I hope you find them interesting and helpful!
https://medium.com/better-programming/6-tips-for-developers-to-handle-imposter-syndrome-7473ea7924f6
['Thomas Guibert']
2020-01-28 19:42:52.129000+00:00
['Programming', 'Technology', 'Imposter Syndrome', 'Startup', 'Software Engineering']
499
CEEK VR Chosen as One of Top 10 Growth Startups Worldwide
CEEK Tokens are available on IDEX, Bancor, CoinSuper and LAToken. CEEK VR is honored and excited to be chosen as one of the Top 10 Growth Startups, selected from thousands of companies, for the global Accelerate program. (CEEK was recently named one of 9 blockchain and cryptocurrency companies to watch by FORBES.) Accelerate is Startup Grind’s invite-only community designed to give fast-growth startups from across the world the tools and resources they need to accelerate their business. Startup Grind is the world’s largest community of entrepreneurs, connecting across over 130 countries. According to Startup Grind, the team spent more than 500 hours interviewing the thousands of outstanding startups that applied, narrowing the field down to the 50 Grind startups and 10 Top Growth companies that showed high marks for potential across a combination of factors, such as the founding team’s experience and product innovation and traction. As part of the Top 10 Growth startups, CEEK VR had the opportunity to present on the main stage of the global conference in Silicon Valley and exhibit CEEK products and services, including CEEK 4D Audio Headphones, CEEK AR/VR Headsets, and CEEK’s blockchain-enabled virtual reality content marketplace featuring the likes of Lady Gaga, Katy Perry, U2, Orlando Magic eSports gaming and a variety of exciting VR experiences. We continue to integrate innovative blockchain-based features into our premium streaming VR platform for validation of content usage, authentication of celebrity merchandise and automatic content creator payments via smart contracts. CEEK’s CEO Mary Spio presented on the main stage and took part in a panel discussion and Q&A. There were an estimated 8,000 people in attendance. Post-conference, CEEK will have access to Startup Grind’s new platform, which is actively supported by its global community of investors, brands and entrepreneurs. CEEK Tokens are available on IDEX, Bancor, LAToken and CoinSuper.
https://medium.com/@ceek/ceek-vr-chosen-as-one-of-top-10-growth-startups-worldwide-84bec04af752
[]
2019-02-18 16:48:07.184000+00:00
['Virtual Reality', 'Top 10', 'Erc20 Token', 'Ethereum Blockchain', 'Blockchain Technology']