prompt | completion |
---|---|
If banks keep cutting employees why do they still have so many staff? | While I realize this article is focusing on the more traditional "bankers' jobs", every time I read an article about jobs at banks, I think of things I've been told by a few close friends who worked in the industry (back when my area had a few banks whose headquarters were local).<p>I am good friends with a couple of folks who worked in bank IT. Both have told me similar stories despite working for different companies and not knowing each other, so I'm reasonably convinced of their accuracy. Though it might be something limited to these local banks (which were not huge players by any stretch), I get the impression it's a common thing. Both worked in IT, one in development and one in "everything except development and mainframe[0]". The non-dev guy told me that about every 6 months, the bank would axe an <i>entire department</i> -- and not just some oddball, unimportant department -- in one case it was a large group of people responsible for administration around loans and mortgages. This department would <i>cease to exist</i> despite being once considered a necessity (and arguably still a necessity). Three months later, they'd hire in a bunch of folks and create a department that was effectively doing what the axed department had been doing (sometimes under the same name and staffed with many of the same people).<p>Non-core departments like HR and IT would ebb and flow in radical ways -- my developer friend was let go when they just decided they were going to pay a company in India to build all of their apps and abandoned the local Java developers who were hired on to reduce their dependency on the mainframe, which never happened.<p>The environment they described sounds like a caricature of "Office Space". There was one case where a guy worked a pay-period too long and discovered he was supposed to have been let go only when his direct deposit didn't appear in his bank account, and one case where they had to sue a former employee because he had been notified 5 years earlier that his position was cut, he'd stopped coming into work, yet they continued to pay him and he continued to cash the checks.<p>They've both been out of "Bank IT" for a long time now, but I'm fairly certain these issues persisted at least until 2010. I recall one of the banks, when they started offering "Online Banking", required you to use your social security number as your logon ID and limited passwords to case-insensitive letters only, with a 10 character maximum. It just had the smell of a clear-text password stored in a varchar(10) field on a database without case sensitivity turned on. And companies like American Express were (are?) doing things almost as badly.<p>Because of all of this, I have a feeling the article has some things wrong. My sense is that the banks these guys worked for never really had a net job loss; they just operated in a revolving-door fashion -- axing departments while increasing headcount in others, and then repeating every few months with the departments' names being pulled out of a hat. I don't doubt that large sums of money are thrown into automation, particularly around algorithmic trading, because it directly makes the bank money (but I doubt that's happening at any of the smaller banks like the ones my friends worked for).<p>From what I understand, the same people that were let go from IT in the last round of layoffs are the only ones they can get to apply for the new jobs[1]. 
My buddy in IT -- who saw his department go from 10 employees to ... him and his boss ... ended up being re-staffed a few months later with five folks who had just been let go from a local competing bank and had worked in IT for his boss at some point in the past... it was a revolving door with the same people going in and out of it.<p>[0] And it was quite literally <i>everything</i>. This was early 2000, so that included the handful of traditional racked servers, routers, and the small number of endpoint desktops connected via 16Mb Token Ring and given mostly to executives for them to use the screen-savers, Lotus 1-2-3 and a terminal program to access the mainframe. He was supposed to be responsible only for the servers/network, but often found himself stuck working on the help desk and doing end-user support due to staffing constraints.<p>[1] Prior to meeting these two fine gentlemen, I applied for a job that I eventually turned down because I was so disappointed with the quality of the individuals who interviewed me. The woman I talked to couldn't hide her disappointment, but there was something off that I later came to believe could be summed up by the phrase "yeah, not surprised". |
The Disappearing American Grad Student | Great article and conversation. The most important factor in the average US undergrad student not opting for graduate school is definitely the astronomical loans they accumulate during their undergrad years.<p>I simply couldn't for the life of me comprehend why the majority of US undergrads are OK taking on such a huge liability and jeopardizing their future prospects. It perhaps could be because "Everyone is doing it" or because "There are no other avenues..." I believe the system is set up in such a way that it's unavoidable and unfair to the students, in the name of getting an "education".<p>Here is a different path/perspective.<p>I came to the US from one of the poor South Asian countries at 18 in the early 2000s to do my undergrad, despite my parents telling me to instead come for graduate school. For grad school you don't pay tuition, and you are paid for your expenses too, working on research or as a TA. I said to myself, the US is the land of opportunity, possibilities are limitless, go early and figure out the specifics later. I was very young, so yeah, the "let's do it" attitude prevailed. My parents were not wealthy either, but they had enough assets to cough up tuition fees. I intentionally selected a small public university in the middle of nowhere that offered a partial scholarship (more on this later). My tuition per year was $9k after the scholarship, if I remember correctly.<p>After I landed, I registered for 15-18 credits per semester and also took a part-time job, and that was enough to cover my living expenses and then some. I believe poverty, or coming from a family without a lot of means or a support system to fall back on, is a great motivating factor not to accumulate any debt. I had to hustle pretty much throughout my undergrad years: if I wasn't in a class, I'd be in the library doing assignments, and if I wasn't in the library I'd be at my job. I wasn't at a frat house partying or playing sports or joining meetup groups or doing the normal things any regular undergrad student would do. I consider myself an average student too, nothing close to the best and the brightest, but I managed to maintain a 3.7 GPA and graduated with a Computer Science degree.<p>In my last semester, I did two internships (summer/fall) at small companies, because when I went to a couple of job fairs just to check them out, I noticed that companies really wanted to see some hands-on experience. I couldn't believe they would pay for internships too, and better than my part-time job. I worked out a deal with my professor to allow me to work in my final semester at a local company, which counted towards a seminar credit too.<p>When I graduated, I had zero debt and three job offers from mid-size companies starting at $50k-65k in a mid-size market (not Silicon Valley). My colleagues at work included Ivy League graduates along with those from top private/public colleges that cost $30k-80k per year in tuition, and they were all in student debt. So in hindsight, I think I made a good choice going to a middle-of-nowhere school. It really made no difference in getting a job or acquiring skills. Sure, I didn't have brand recognition, but I could hold my own during the interviews to get offers.<p>Once again, I'm not the smartest guy, but I had a strong determination not to ask my parents for money or take on loans. Whatever they offered me initially for tuition, I paid back after I started working full time, and then some. 
Someone pointed out the culture in Asian families where parents do anything they can to invest in their child's education, because they believe that's the way to a better life.<p>After being in the industry for a couple of years, I wanted to increase my depth/breadth of knowledge in my field, so I found a Master's program that I could pursue part time (nights/weekends). I got my new employer to pony up half of the tuition as well (I tried for full tuition reimbursement), which I made a condition of accepting their job offer. Total cost was $20k tuition for 3 years at a so-called "top 20 nationally ranked engineering school." I couldn't care less about the ranking...<p>By 27 I had both undergrad/masters and 5+ years of experience in the industry. Once again I had no debt.<p>I believe the system is set up in such a way that it's easy for the majority of the population to take on huge loans without fully understanding the implications for their future. I mean, these kids are 18. It feels almost like using one credit card to pay the minimum payment on another credit card.<p>Also, the cost of education keeps going higher and higher. I think anytime the government gets involved, the price balloons astronomically. Take a look here <a href="https://ourworldindata.org/grapher/price-changes-in-consumer-goods-and-services-in-the-usa-1997-2017" rel="nofollow">https://ourworldindata.org/grapher/price-changes-in-consumer...</a> #1 is college tuition, followed by Medicare and child care.<p>I think 20 years from now it really won't matter which school you went to, as the market is going to put a higher emphasis on what you have done or can do (skill-wise) as opposed to which school you went to. Until then, we US taxpayers are going to foot the bill for all these student loans that are eventually going to blow up like the housing bubble.<p>So I leave you with some food for thought: if a kid from a very poor country can come to the US with little to no means, and be dedicated enough to pursue higher education and the American dream without taking on any loans, it should be far easier for native-born US students to do the same. |
Ray Kurzweil thinks we could start living forever by 2029 | Kurzweil's schedule is unrealistic, but his basic outline of how the job gets done is an accurate portrayal of one path ahead to physical immortality.<p>I wrote this a few years back:<p>------<p>There exist a growing number of people propagating various forms of the viewpoint that we middle-aged folk in developed countries may (or might, or certainly will) live to see the development and widespread availability of radical life extension therapies. Which is to say medical technologies capable of greatly extending healthy human life span, probably introduced in stages, each stage effective enough to grant additional healthy years in which to await the next breakthrough. You might think of Ray Kurzweil and Aubrey de Grey, both of whom have written good books to encapsulate their messages, and so forth.<p>Some people take the view of radical life extension within our lifetimes at face value, whilst others dismiss it out of hand. Both of these are rational approaches to selective ignorance in the face of all science-based predictions. It usually doesn't much matter what your opinion is on one article of science or another, and taking the time to validate science-based statements usually adds no economic value to your immediate future. It required several years of following research and investigating the background for me to feel comfortable reaching my own conclusions on the matter of engineered longevity, for example. Clearly some science-based predictions are enormously valuable and transformative, but you would lose a lifetime wading through the swamp of uselessness and irrelevance to find the few gemstones hidden therein.<p>As a further incentive to avoid swamp-wading, it is generally well known that futurist predictions of any sort have a horrible track record. Ignoring all futurism isn't a bad attention management strategy for someone who is largely removed from any activity (such as issuing insurance) that depends on being right in predicting trends and events. You might be familiar with the Maes-Garreau Law, which notes one of the incentives operating on futurists: 'The Maes-Garreau Law is the statement that "most favorable predictions about future technology will fall within the Maes-Garreau Point", defined as "the latest possible date a prediction can come true and still remain in the lifetime of the person making it".'<p>If you want to be a popular futurist, telling people what they want to hear is a good start. "You're not going to be alive to see this, but..." isn't a compelling opening line in any pitch. You'll also be more convincing if you yourself have good reason to believe in your message. Needless to say, these two items have no necessary relationship to a good prediction, accuracy in materials used to support the prediction, or whether what is predicted actually comes to pass. These incentives do not make cranks of all futurists - but they are something one has to be aware of. Equally, we have to be aware of our own desire to hear what we want to hear. That is especially true in the case of predictions for future biotechnology and enhanced human longevity; we'd all like to find out that the mighty white-coated scientists will in fact rescue us from aging to death. 
But the laws of physics, the progression of human societies, and the advance of technological prowess don't care about what we want to hear, nor what the futurists say.<p>I put value on what Kurzweil and de Grey have to say about the potential future of increased human longevity - the future we'll have to work to bring into being - because I have performed the due diligence, the background reading, the digging into the science. I'll criticize the pieces of the message I don't like so much (the timescale and supplements in the case of Kurzweil, WILT in the case of de Grey), but generally I'm on board with their vision of the future because the science and other evidence looks solid.<p>But few people in the world feel strongly enough about this topic to do what I have done. I certainly don't feel strongly enough about many other allegedly important topics in life to have done a tenth as much work to validate what I choose to believe in those cases. How should one best organize selective ignorance in fields one does care about, or that are generally acknowledged to be important? What if you feel - correctly, in my humble opinion - that engineered longevity is very important, but you cannot devote the time to validate the visions of Kurzweil, de Grey, or other advocates of longevity science?<p>The short answer is trust networks: find and listen to people like me who have taken the time to dig into the background and form our own opinions. Figuring out whether ten or twenty people who discuss de Grey's view of engineered human longevity are collectively on the level is not too challenging, and doesn't require a great deal of time. We humans are good at forming accurate opinions as to whether specific individuals are idiots or trustworthy, full of it or talking sense. Fundamentally, this establishment of a trust network is one of the primary purposes of advocacy in any field of endeavor. The greater the number and diversity of advocates who have taken the time to go digging and come back to say "this is the real deal," the more likely it is that they are right. It's easy, and probably good sense, to write off any one person's views. If twenty very different people are saying much the same thing, having independently come to the same viewpoint - well, that is worth spending more time on.<p>One of the things I think we need to see happen before the next decade is out is the establishment of more high-profile longevity advocates who discuss advancing science in the Kurzweil or de Grey vein: nanotechnology, repairing the molecular damage of aging, and so on. Two, or three, or five such people is too few. |
Almost everything on computers is perceptually slower than it was in 1983 | Indirectly related...<p>I am the one-man team for the CLI (Command-Line Interface) of a new cloud computing PaaS.<p>This is a small project (10+ employees, mostly software engineers) inside a larger company (500+ employees).<p>Sometimes I have a [really] hard time dealing with management and co-workers because they expect me to mimic the behavior of our GUI console (web-based) with a level of fidelity that is often just inappropriate for the CLI, or not what you want on a CLI.<p>Most of this is caused by a lack of experience in both using and writing CLI tools (I happen to have way more experience than them in this field because I have been a heavy Unix user since I was a kid, besides having a greater understanding of Unix programming in general). I wish they would at least read "The Art of Unix Programming" by Eric Raymond to better respect some design decisions I made.<p>I am very open to criticism and even enjoyed the unexpected help I received from the project's UX/UI designer, who was enlisted to help me out (except for things I clearly decided against but that were kind of forced on me).<p>Here is an incomplete list of things I had to deal with related to performance/usability:<p>* (lesser problem) backlash against the language: Go was my language of choice for this; many on the team would rather have used Node.JS<p>* lack of understanding that I have to support a vast number of configurations: BSD/macOS/Linux/Windows + different shells + terminals (and it gets worse / weird when you talk about Windows), and that some things that look nice on one system might not look the same in most terminals. Also, many times it is difficult to predict with accuracy what terminal is in use (I am looking at you, Windows Subsystem for Linux, which pretends it is xterm-256color while not supporting most of its features).<p>* colors, colors, colors. The designer wanted colors everywhere. Foreground, background, Alpha-RGB. You cannot imagine how hard it was to convince them that we had better stick with the standard 16 ANSI colors and that an alpha channel didn't even make much sense, at least while I am alone (see the sketch after this comment)<p>* Explaining why we better stay away from new UTF-8 icons "that only renders nice on macOS iTerm"<p>* Explaining why colors look different in other terminals (say, Hyper)<p>* Explaining that I have no control whatsoever over the screen background of the user, can't even detect it appropriately, and that it would not be nice to do so<p>* Animations, animations, animations. Why do they expect animations everywhere? How do I explain to them that if I try to open a URL in the browser it opens almost instantaneously, and it needs no animation because the user wouldn't even see it (and then being told to just put a small delay before opening for it to appear to happen... fast)?<p>* Decorations, decorations, decorations. I lost this battle and the error handling mechanism prints a weird symbol on every line of error. I am actually stripping the decorations a few pieces at a time and intend to eventually get rid of them just by being a rebel and removing them kind of "accidentally" (especially when they get in the way).<p>* Lack of understanding of Unix software programming patterns (filter programs, pipes, redirections, etc. -- see The Art of Unix Programming) [and why I don't accept passwords to be passed as an argument as this is... 
soooo easy to use™!]<p>* No respect for spacing on a listing of entries: it's hard to defend that tabular data must be presented as tabular (even though it might get repetitive) or at least have easily parsable separations. One example is a proposal for listing services while showing their projects, and only printing the project name on the first line of each service [assuming the project above for all others]. This is a deal breaker for piping or filtering out data most times, and people just want me to ignore it and feel disappointed with me when I do the right thing (ignore their opinion instead).<p>* They demanded that I add heuristics to a status system due to the lack of proper feedback about remote actions such as deploying/restarting/stopping services. This would never work very well because of many things: concurrent calls, requests that just took longer or happened in practice in 'the wrong order', and so on. It took more than six months until I got a decent API change where I could trust the status, and six more months for a more CLI-like approach of not doing more than one thing at once to be agreed upon (instead of always trying to replicate the GUI [blocking] behavior).<p>Many of these things above I managed to ignore, did in a way that was easily revertible, or just accepted for the sake of my well-being [and wished/planned/plan to move to a more sane approach soon]. |
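A footnote on the colors discussion above: the usual compromise is to emit ANSI color only when stdout is a real terminal and the user hasn't opted out. Here is a minimal sketch in C (the CLI discussed above is written in Go, so this is an illustration of the heuristic, not its actual code; the NO_COLOR and TERM checks are common conventions rather than a full terminfo query):<p><pre><code> /* Decide whether a CLI should emit ANSI color escape codes. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static int want_color(void)
{
    const char *term = getenv("TERM");

    if (!isatty(STDOUT_FILENO))         /* piped or redirected: no escapes */
        return 0;
    if (getenv("NO_COLOR"))             /* user opted out (no-color.org) */
        return 0;
    if (!term || strcmp(term, "dumb") == 0)
        return 0;
    return 1;                           /* and then stick to the basic 16 colors */
}

int main(void)
{
    if (want_color())
        printf("\033[32mok\033[0m\n");  /* green from the basic SGR set */
    else
        printf("ok\n");
    return 0;
}
</code></pre>The same check also answers the background-color complaint: if you cannot even be sure escape codes are safe to emit, reliably detecting the user's background is firmly out of scope.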
What You Need to Know About Mozilla's New Firefox Browser Coming Soon | > Mozilla will flip the switch on a completely new browser with the release of Firefox 57<p>It's not a completely new browser. It's still Firefox with some components swapped out, a lot of cruft taken out (due to cutting support for legacy extensions) and lots of individually small performance fixes (which were mainly made possible thanks to the recent rollout of multiprocess).<p>These new components have been taken from Servo, which is an actually new, written-from-scratch browser engine, but which is still far away from being production ready (might never be), so they can only retrofit finished components for now.<p>> Firefox 57 is the end of the road of a few Mozilla internal projects that aimed to fix most of Firefox's previous problems<p>Not the end of the road. Just a major milestone with PR drumrolls. More is very much already scheduled for the next few releases, the biggest of which will be WebRender.<p>> meaning developers of some Chrome extensions may soon port their add-ons to work on Firefox.<p>As is written directly below that, this API has been shipped in Firefox for a while already, so lots of Chrome extensions have already been ported.<p>> These users will be put in the unfortunate situation of having to choose between running Firefox without their favorite add-ons or finding a new browser where similar add-ons exist.<p>Or switching to Firefox 52 ESR, which will receive security updates until June 26, 2018 and continue to support these legacy extensions until then. Not a final solution, but it might bridge the gap until viable alternatives have been created.<p>Also, switching to a different browser will rarely make sense, as Firefox still is the most extensible browser by a long shot.<p>> Firefox 57 will also be the first Firefox version to support the new Quantum engine. Announced last year, the Quantum engine replaces some parts of the old Gecko engine with new components written in a mixture of Rust and C++.<p>"Quantum" is marketing kerfuffle from Mozilla. The engine is still called Gecko, it's just getting some components swapped out. Again, the components came from Servo, which is written in Rust. I am not aware of major components being included that have C++ code.<p>> Firefox 57 will include more Project Quantum code, such as Quantum Render, a brand new, GPU-optimized rendering pipeline based on Servo’s WebRender project<p>As I already mentioned above, WebRender is not in 57, but should hopefully land in the next few versions.<p>> Mozilla claims that all these changes have resulted in considerable speed boosts to Firefox's boot-up and browsing behavior, although this claims should be taken with a grain of salt, as all browser makers say the same thing when they launch a new version.<p>Or you could look at the fucking thing and notice that, oh hey, it is actually massively faster than previous Firefox versions. 
You don't even need benchmarks for that, it's clearly visible.<p>Some of the changes that are kind of counted into Quantum have been happening since Firefox 48 already, which is also why Mozilla's claim was that 57 is twice as fast as <i>52</i>, so yeah, it won't be as significant of a difference going from 56 to 57, but it should still be relatively obvious.<p>I won't blame the author for misunderstandings or not being aware of every little detail, but to write something like that and not even test the two versions against each other is just lazy and bad journalism in every way.<p>> Also starting with Firefox 57, new Firefox installations will disable the search widget that used to appear in the top right corner of the old Firefox UI, an iconic part of the browser's old interface.<p>They did plan this, but apparently lots of people do actually use the search bar, so they've decided to put it back in by default.<p>> The planned UI changes are most likely to annoy some Firefox users because the Classic Theme Restorer Firefox add-on will also stop working, meaning users won't be able to control how their browser UI looks.<p>Or to 1) increase performance, the new UI is more efficient, 2) reduce maintenance cost, the new UI uses vector graphics for most things allowing it to be reused across OSs, 3) be more flexible, the vector graphics also allow for Mozilla to ship a Compact-mode and a Touch-mode out of the box, 4) <i>look</i> faster, the animations keep your brain busy and make it feel like things are faster, 5) help with marketing, a new UI is a far more visible change than a higher score on some obscure benchmark and will attract press coverage.<p>Besides that, Classic Theme Restorer is already broken anyways due to legacy extensions being removed.<p>And the new UI is actually more "classic" than Australis (square tabs, back/forward/refresh not glued to the URL-bar, dropdown-menu-like Hamburger-menu as opposed to the touch-optimized Hamburger menu in Australis etc.), so while Classic Theme Restorer did do lots more than that, the new UI should make a lot of users of CTR happy just by not being as unconventional as Australis. |
History of AOL Warez | AOHell was just one of the first Visual Basic applications for manipulating AOL.<p>There were at least a dozen well-known ones used for different purposes.<p>Some of them were for tormenting people by sending automated IMs, some were for flooding mailboxes and some were for "phishing", but they were terrible; all they would do is open an IM to a random member and dump in a preset block of text that tried to convince them to hand over their password, and the scripts were bad. They were obviously written by young teenagers.<p>These tools often had a small user interface that floated over the AOL window. They had panels that would slide out to perform various functions.<p>The most popular tool for a while was Super Mad Cow.<p>Super Mad Cow was a Swiss Army knife of AOL tools in one application.<p>Here are some of the functions it had that I remember:<p>* Phisher - You could set up macros that would randomly choose accounts and IM them with preset blocks of text, to try to social-engineer people into giving you their password or creating a subaccount for you.<p>* Forwarder - The application would open your mailbox, index all of your mail and then present an interface with checkboxes that you could multi-select to mass-forward mail.<p>* Lister Bot - In order for this to work you have to know that AOL had no limit on the size of your email inbox, so you could have gigabytes of data stored in your own inbox.<p>This function would take a CD image or a large zip or rar file, split it into small chunks of a few megabytes, attach each chunk to an email to yourself and tag the subject of each email with a serial number so lister clients could identify all the chunks for that file, download them and re-assemble them (a toy sketch of this chunking scheme follows this comment). This way you could have dozens of CD images, floppy images and other archives in your inbox that you could trade with other "couriers" (people who traded warez).<p>The lister bot would automatically scan your inbox and make a list of all the packages you had, including packages that had been forwarded to you by other bots, then it would create an email with an index that you could forward to other screen names.<p>The lister bot also had a chat bot function where you could enter a chat room and it would announce itself, and if someone typed a special keyword the bot would email them the list of your warez. The bot would also announce a specific list of hot items you had, and chat participants could request those items straight away. Then that person could type a special keyword followed by the serial number of the package they wanted, and your list server bot would automatically forward all the parts of that package from your inbox to the requesting screen name.<p>The requested packages would arrive in the recipient's inbox and could be downloaded instantly, because you were just forwarding emails -- the way AOL forwarded emails inside their own system is they would just copy the files directly into your mailbox via their own filesystem, so there was virtually zero wait time.<p>* Hidden keyword menu - AOL employees occasionally created silly content that could be accessed using unlisted keywords (the infamous cop-eating-a-donut keyword comes to mind). 
The authors of Super Mad Cow would update the known hidden keywords with each new version and present a menu of them for easy access.<p>* Mailbox manager - The mailbox manager would create a personal index of all the packages in your inbox and let you perform maintenance tasks on them at the package level, such as deleting and forwarding, without having to work with the individual emails that had chunks of the packages in them. It would also read emails from your other subaccounts with commands to forward packages to them, then it would delete the main copy so you could manage different kinds of packages using different subaccounts. For example you could have one subaccount for trading ebooks, one for music, one for games and one for porn.<p>* List parser - The list parser would enter known warez chat rooms and watch for the LIST keyword. This was a keyword used by lister bots that indicated they were about to either dump a list of warez or offer to forward their list. The list parser would either read the list from the chat room and present a list of packages you could request with checkboxes and then automatically request all the necessary parts of the package for you, or it would request an email with the package list, open your inbox, parse the list and then do the same. Later on, list parsers gained the ability to scan your inbox to see if you already had some chunks of a package and only request the missing chunks. |
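The chunk-and-serial-number scheme of the lister bots is simple enough to sketch. This toy C program splits a file into tagged parts on disk (the real bots attached each part to a self-addressed AOL mail and put the tag in the subject line); all names and sizes here are made up for illustration:<p><pre><code> /* Split FILE into fixed-size parts tagged with a serial number, so a
  * client can find, order, and reassemble them. */
#include <stdio.h>
#include <stdlib.h>

#define CHUNK (2 * 1024 * 1024)  /* "a few megabytes" per part */

int main(int argc, char **argv)
{
    static char buf[CHUNK];
    char name[4096];
    FILE *in, *out;
    size_t n;
    int i = 0;

    if (argc != 2) {
        fprintf(stderr, "usage: chunker FILE\n");
        return 1;
    }
    if (!(in = fopen(argv[1], "rb"))) {
        perror(argv[1]);
        return 1;
    }
    while ((n = fread(buf, 1, CHUNK, in)) > 0) {
        /* the "subject line": package name plus part index */
        snprintf(name, sizeof name, "%s.part%04d", argv[1], i++);
        if (!(out = fopen(name, "wb"))) {
            perror(name);
            return 1;
        }
        fwrite(buf, 1, n, out);
        fclose(out);
    }
    fclose(in);
    fprintf(stderr, "%s: %d parts\n", argv[1], i);
    return 0;
}
</code></pre>Reassembly is the mirror image: open .part0000, .part0001, ... in order and concatenate until a part is missing -- which is also why the later list parsers could request only the missing chunks.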
Tom Duff on Duff's device (1988) | NeWS had some spectacular abuses of C preprocessor macros with nested switch statements, loops and gotos.<p>Especially the conic splines code by Vaughan Pratt and James Gosling, which "flattens" Bezier curves into conic splines (rather than into straight lines, as most PostScript renderer implementations do), which Vaughan Pratt's conix package can draw really fast.<p><a href="http://www.chilton-computing.org.uk/inf/literature/books/wm/p005.htm" rel="nofollow">http://www.chilton-computing.org.uk/inf/literature/books/wm/...</a><p>SunDew - A Distributed and Extensible Window System<p>[...] Two pieces of work were done at SUN which provide other key components of the solution to the imaging problems. One is Vaughan Pratt's Conix [53], a package for quickly manipulating curve bounded regions, and the other is Craig Taylor's Pixscene [63], a package for performing graphics operations in overlapped layers of bitmaps. [...]<p>[...] Pixscene is based on a shape algebra package. The ability, provided by Conix, to do algebra very rapidly on curves should make non-rectangular windows perform well. [...]<p>[53] Pratt, V., Techniques for Conic Splines, Computer Graphics 19(3), pp. 151-159 (1985).<p>[63] Taylor, C. and Pratt, V., A Technique for Representing Layered Images. SUN Microsystems Inc., 2550 Garcia Avenue, Mountain View, CA 94043.<p>Direct Least-Squares Fitting of Algebraic Surfaces:
Vaughan Pratt,
Sun Microsystems Inc.
and
Stanford University.
April 30, 1987.<p><a href="http://boole.stanford.edu/pub/fit.pdf" rel="nofollow">http://boole.stanford.edu/pub/fit.pdf</a><p><a href="http://donhopkins.com/home/code/NeWS/SUN/src/server/graphics/pathtoarc.c" rel="nofollow">http://donhopkins.com/home/code/NeWS/SUN/src/server/graphics...</a><p><pre><code> /*-
Convert a PostScript style path to an arclist
pathtoarc.c, Thu Sep 19 15:45:44 1985
*/
</code></pre>
<a href="http://donhopkins.com/home/code/NeWS/SUN/src/server/graphics/arctochain.c" rel="nofollow">http://donhopkins.com/home/code/NeWS/SUN/src/server/graphics...</a><p><pre><code> /*-
Convert a conic arc to a curve
arctochain.c, Tue Jun 11 14:23:17 1985
"Elegance and truth are inversely related." -- Becker's Razor
Vaughan Pratt
Sun Microsystems
This code is a version of the conix package that has been
heavily massaged by James Gosling.
*/
</code></pre>
[...]<p><pre><code> /* macros for the conic machines */
#define out(b) { \
if (b) \
xpos += dir; \
else \
*ypos++ = xpos; \
}
#define across(rel,q1) { \
out(1); \
Fx += Fxx, Fy += Fyx; \
if ((F += Fx) rel 0) \
goto q1; \
}
#define down(rel,q1) { \
out(0); \
if (--rem0 < 0) \
goto final; \
Fx -= Fxy, Fy -= Fyy; \
if ((F -= Fy) rel 0) \
goto q1; \
}
#define trans1(axis, q1) \
for (;;) \
axis(>=, q1);
#define trans2(axis, q1, rel, q2) \
for (;;) { \
axis(<, q1); \
if (Fx rel Fx_err) \
goto q2; \
}
</code></pre>
[...]<p><pre><code> /* enter conic machine at appropriate state */
rem0 = size.y;
if ((size.x > 0) == (side < 0))
goto EOD;
else
goto OED;
/* OE state machine */
ODE:trans1(across, OED);
DOE:trans2(across, ODE, >, OED);
OED:trans2(down, ODE, <, DOE);
/* EO state machine */
EDO:trans1(down, DEO);
DEO:trans2(across, EDO, >, EOD);
EOD:trans2(down, EDO, <, DEO);
final:;
/*- clearasil(buf0+1, bitlen); /* clean out 0011/1100 pimples */
assert(ypos <= curve->x + (curve->y1 - curve->y0 + 1));
</code></pre>
The main loop of the NeWS PostScript interpreter itself is the realization of Duff's dire warning "Actually, I have another revolting way to use switches to implement interrupt driven state machines but it's too horrid to go into." (A toy distillation of that trick follows at the end of this comment.)<p><a href="http://donhopkins.com/home/code/NeWS/SUN/src/server/nucleus/PostScript.c" rel="nofollow">http://donhopkins.com/home/code/NeWS/SUN/src/server/nucleus/...</a><p><pre><code> /*
* This is the PostScript interpreter. Its implementation breaks almost every
* rule of structured programming. This is a result of needing to support
* multiple PostScript processes and a paranoia about performance. There are a
* couple of #includes that you should think of as procedure calls. Life is
* tough in the trenches.
*
*/
/*
* The PostScript interpreter has a fairly simple basic structure:
*
* while (1) { 1. Pick up an object to interpret 2. Execute it }
*
* Part 1, picking up an object to interpret is handled by the statement that
* starts "switch (es->type)". es points to the top of the execution stack: a
* stack of sources of executable objects. The switch determines what kind of
* thing the object is being fetched from: an array, a string, ...
*
* Part 2, executing an object, is handled by the code that starts with the label
* "execute_top". It does some tests and switches on the type of the object
* being executed.
*
* All of this is complicated by the mechanism for blocking. If the process
* needs to block, say by needing to read a new buffer from a socket or by
* needing to wait for a message, then the interpreter needs to stop working
* on this process and leave information around that lets it restart. This is
* the "restart_state" variable: it is used as the index to a switch statement
* that is executed when the interpreter starts up, so it won't necessarily be
* intering the main loop of the interpreter at the top. There are cleaner
* ways of doing this, but they can't be written in C!
*/
</code></pre>
[...]<p><pre><code> /*
* Restart the interpreter where it left off last
*/
switch (ee->restart_state) {
case 1:
goto read_1;
</code></pre>
[...]<p><pre><code> case 24:
goto read_24;
}
/* The main loop of the interpreter */
while (1) {
struct object new;
</code></pre>
[...]<p><pre><code> /* Pick up an object to execute */
switch (es->type) {
</code></pre>
[...]<p><pre><code> case file_execution:{
register PSFILE *f = es->env.file.file;
/* be charitable and think of this as a procedure call */
#ifdef GPROF_HOOKS
/* gP_GetFile defined in parse_file.h */
#endif
#include "parse_file.h"
break;
}
</code></pre>
<a href="http://donhopkins.com/home/code/NeWS/SUN/src/server/nucleus/parse_file.h" rel="nofollow">http://donhopkins.com/home/code/NeWS/SUN/src/server/nucleus/...</a><p><pre><code> /*-
* Parse a PostScript object from a PSFILE
*
* parse_file.h, Mon Dec 31 09:36:02 1984
*
* James Gosling,
* Sun Microsystems
*/
/*
* This is a version of getc that pauses if the buffer is empty and suspends
* the process
*/</code></pre> |
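Duff's "too horrid to go into" pattern -- a switch whose cases goto labels buried inside the main loop, exactly what restart_state does above -- is easier to see in a toy than in the real interpreter. A self-contained sketch of the trick (my own illustration, not NeWS code):<p><pre><code> /* A resumable state machine: step() "blocks" by recording a restart
  * state and returning; the switch on re-entry jumps back into the
  * middle of the loop. The same abuse of C that Duff's device relies on. */
#include <stdio.h>

struct task { int restart_state; int i; };

static int step(struct task *t)   /* returns 1 while there is more work */
{
    switch (t->restart_state) {
    case 1: goto resume_1;
    case 2: goto resume_2;
    }
    for (t->i = 0; t->i < 3; t->i++) {
        printf("part A of iteration %d\n", t->i);
        t->restart_state = 1;     /* "block" here... */
        return 1;
resume_1:
        printf("part B of iteration %d\n", t->i);
        t->restart_state = 2;     /* ...and here */
        return 1;
resume_2:;
    }
    t->restart_state = 0;
    return 0;
}

int main(void)
{
    struct task t = { 0, 0 };
    while (step(&t))
        ;                         /* each call resumes mid-loop */
    return 0;
}
</code></pre>Note that all the loop state has to live in the struct rather than in locals, which is precisely why the real thing "breaks almost every rule of structured programming".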
Startup Mistakes: Choice of Datastore | Wow. This fell off the front page in the time it took for me to drive to work. Guess the NoSQL crowd ain't got time for that.<p>While I'm generally sympathetic to your post, there are some things that are red flags.<p>>If you have ten services accessing the same database and sharing data between themselves<p>Whoever accesses the database schema owns it. If you have ten systems accessing your database then ten teams own it. And if ten teams own it then nobody owns it. Nobody can change it. I've seen this at a successful start-up that got big but then couldn't rev its order management, because every team had their finger in the pie: we couldn't do a schema update without breaking everyone, and of course one team was "under tremendous pressure to hit a major milestone and we just can't do that now" for over a year.<p>Now I would turn this around into a win for RDBMS by suggesting the use of functions or stored procedures: with an RDBMS we can construct an API, and then we can version those APIs. And then the team that owns the database can do what they like. That said, we can do the same with NoSQL databases by not allowing other teams to access them. The team that owns the NoSQL database is required to maintain an API for it.<p>I've only ever had nightmares with other teams coding against my schema. GraphQL worries me in that respect and I'd love to hear how people here have fared with long-lasting GraphQL, in the real world.<p>>Django, for example, makes migrations trivial, as you just change your application-level classes and the database gets migrated automatically.<p>I've had automated migration systems grind to a halt and leave the DB fucked too.<p>>Priscilla used a graph database when her data was relational. Her husband, furious, filed for divorce.<p>There are no schemas that are "relational" but not "a graph". However there are plenty of schemas where a graph database is a natural fit but that require either one-table-per-node-type or building a graph model on top of your RDBMS (e.g. an Entity-Attribute schema). Oy.<p>There are also many schemas where there is only one entity type, but every join is against itself, and we're looking to join all the way out to the clique. In this case would you suggest an RDBMS, and then put the iteration in the application? You suggested earlier that making up for the inadequacies of the datastore in the application is a bad idea.<p>I've got a graph application that uses NoSQL and it was the right call. An RDBMS would have allowed us to write something that worked for simple cases, but that would have brought the system to its knees based on some customer usage. The solution for the RDBMS would be the same as what we had to do for NoSQL. But up to that moment, the NoSQL allowed us to iterate far faster than an RDBMS.<p>>For example, if you later need to compile a list of all the brands of all the products on your store, an RDBMS can easily do that by reading the “brands” table<p>Only if you built a brands table. You can't argue that we can't predict the future, so use an RDBMS because it's easy to change, but then make arguments that require that the builder accurately predicted the future and built the schema with that foresight. 
Sure, we could go and pull a brands table out of the existing tables, but that's work, and it might be work on a live database that brings it down.<p>A graph database would be just as likely to have 10 brand nodes, since the overhead of creating the first such node is far lower than creating an entire table and updating the schema.<p>>Relational databases excel at easily providing answers to questions that weren’t predicted at the time when the data model was designed.<p>Or a NoSQL database with Spark. "But Spark is something new to learn."<p>And this is the biggest flaw in your argument. You're pro-RDBMS because you know SQL and how to run an RDBMS, create the schema, and write the queries. It is incredibly easy to get started with MongoDB. That right there is why it is popular. Not because it's good. But when you say "Just get something started and use an RDBMS", you're actually saying "I know you know JavaScript, but I need you to learn Modula-2 for this part of the system". Fundamentally different syntax and strict types (or "schema").<p>>For all its unparallelizability, ACID is pretty damn nice.<p>Until someone holds transactions open across network calls and kills throughput. I've seen that issue lose a company a multi-million dollar contract because of contention on a single row. Or until someone chooses the wrong isolation level ("But I used a transaction!") and two transactions happily decrement non-atomically. This shit be hard, and part of the "hard" is not knowing you're doing it wrong.<p>>You’ll have plenty of time figuring out what to use when you know your exact usage patterns, if the business manages to not die until then.<p>So start with something quick and easy then. You've already managed to describe two scenarios where an RDBMS blew up in practice (multiple teams hitting a DB causing schema lock; it's a network, not a flat hierarchy).<p>If you have an experienced SQL team, by all means go with an RDBMS. But let's not pretend they are a panacea. Honestly, I'd use a graph database pretty much all the time - <i>if I could only trust them</i>. But it's the quality, reliability and longevity that I have a problem with, not the nature of how the database organizes data. |
Tesla Semi | I was wondering if the best early markets to target might be in the European Union. First of all, at least on the PR side of things we seem to be a bit more into sustainability; more companies might want to get these just to get a good public reputation. But aside from that it might make more economic sense too.<p>So I tried a rough estimation.<p><i>TL;DR:</i> I made a very, <i>very</i> rough estimation which suggests that <i>if Tesla's original cost per mile number is accurate</i>, it could be <i>very</i> successful in a handful of European countries, for companies that do not transport cargo internationally.<p>Making worst-case assumptions about the average costs at each step, switching to the Tesla Semi would be a net <i>increase</i> in costs of 19¢/mile on average.<p>Making slightly-more-lenient assumptions about electricity costs, switching would give a 28¢/mile savings on average (compared to their claimed 25¢/mile in the USA).<p>The best-case scenario for Tesla would be a trucking company in Norway that only delivers inside Norway, because then that number improves to somewhere between 59¢/mile and 80¢/mile.<p>Also, they need to give more context for the $200k+ fuel savings claim, because it seems a bit suspicious: they claim a savings of 25¢/mile, but fuel cost is only a fraction of that. So you would need to travel like a million miles before you reach that amount, which would take two to three decades at the average distance covered by a truck each year.<p>I'm not an economist, and a lot of ridiculous assumptions were made in these calculations, so don't take it too seriously.<p>Still reading? Down the rabbit hole we go...<p>In the reveal video Tesla claims a cost of $1.51/mile for diesel vs $1.26/mile for their semi[0], so a 25¢/mile savings (I'm going to ignore the convoy savings for now). This assumes fuel costs of $2.50/gallon and 7¢/kWh.<p>Oh, BTW: the estimated average miles per year is 45k in the US[1]. So at 25¢/mile that would be $11.25k saved per truck in the US on average - which presumably includes costs saved on repairs. Again: where does that $200k+ fuel savings number come from?<p>Now, in Europe gas prices are (on average) much higher than in the USA. What would be the price per mile here? Well, to estimate that we must also look at how the price of electricity compares between Europe and the USA.<p>For diesel, I looked at globalpetrolprices.com[2][3]. It does not provide an average for all of the EU (nor prices per US state, for that matter). Truckers will often plan in such a way that they fill up in the countries on their route with lower gas prices, so let's err on the side of caution and make it $5/gallon. To compare the relative difference I'll take the current average US price, which is $2.83/gallon. So gas is 1.8x more expensive at the moment, on average.<p>Now, we do not know the MPG Tesla assumed for diesel trucks. Let's go with the worst-case comparison again, which would be 8 MPG. Increasing $2.83/gallon to $5.00/gallon, that means an increase of 27.12¢/mile. Making the improbable assumption that all other costs relevant to this calculation are equal, we can just add that to Tesla's number of $1.51/mile, and end up with $1.78/mile for diesel in Europe.<p>For electricity I looked up the official governmental statistics provided for electricity in the USA and the European Union[4][5]. 
Taking the provided national averages, and converting the Euro to US dollars at the current rate of 1.00 to 1.18, I get these numbers:<p>EU: 15.51¢/kWh domestic, 8.01¢/kWh industrial (second half 2016)<p>USA: 12.90¢/kWh residential, 9.89¢/kWh transportation, 7.23¢/kWh industrial (August 2016 - more recent statistics exist but I figure we should compare the same time period)<p>Note that these numbers make Tesla's claim seem a bit fishy, since they assumed 7¢/kWh for the USA, which you don't even get at industrial-scale prices.<p>Europe does not distinguish transportation from other sectors yet, but the worst-case scenario would be using their domestic prices compared to the USA's transportation prices. That would be about 1.5x more expensive, still not as much as diesel.<p>(I added industrial electricity prices because perhaps a large transportation company with a fleet of electric vehicles and their own charging stations would use so much electricity that they would need industrial energy contracts. At that point the comparison obviously gets a whole lot rosier)<p>We don't know how much of a factor fuel price is in the price per mile calculation of the Tesla Semi. If we take their claim of lower maintenance costs at face value, it should be a bigger portion of the total. Now, the even-worse-than-worst-case scenario would be assuming it's <i>all</i> of it, since that would give the biggest adjustment, and that the relative difference is equal to the domestic price difference:<p>$1.26/mile * (15.51¢/kWh / 9.89¢/kWh) = $1.97/mile, for a net <i>increase</i> of 19¢/mile.<p>If we take the relative difference between domestic prices as our starting point, it becomes better again:<p>$1.26/mile * (15.51¢/kWh / 12.90¢/kWh) = $1.51/mile, for a net savings of 28¢/mile.<p>So, using the worst possible adjustment for diesel and electricity prices, worst MPG, and worse-than-worst price-per-mile adjustment for Tesla, the economic benefits of switching to these will still be bigger in Europe.<p>Now let's look at the country with (probably) the best numbers for Tesla: Norway. Despite being an oil-exporting country, it has the highest price per gallon (except Iceland), and a lower price per kWh than the USA:<p>Diesel: $6.99/gallon, electricity: 11.30¢/kWh domestic, 6.28¢/kWh industrial<p>Keeping everything else worst-case-scenario for Tesla as before:<p>($6.99 - $2.83 per gallon) / 8 MPG = 52¢/mile, for a total of $2.03/mile for diesel<p>$1.26 * 11.30 / 9.89 = $1.44/mile for Tesla, or 59¢/mile saved.<p>If we assume domestic-to-domestic, the ratio becomes 11.30/12.90. The worst-case scenario here would be assuming fuel is the same percentage of the price per mile as for diesel:<p>$2.50/gallon / 8 MPG = 31.25¢/mile, or 31.25/1.51 = 20.7% of the total price per mile<p>(($1.26 * 79.3) + ($1.26 * 20.7) * 11.30 / 12.90) / 100 = $1.23/mile, for a total of 80¢/mile saved. (A compact restatement of this model follows at the end of this comment.)<p>Of course, a proper cost calculation would be much more complicated, since it would depend on the routes your transport company takes, where you are located, how big the company is, etc.<p>With so many countries I can only assume it requires more red tape to get their semi through all the required tests in Europe (even with EU-wide standards simplifying things there), but it would probably be worth it for Tesla to target specific countries with higher fuel costs and lower electricity costs.<p>Tangent: in the process of looking this up, I discovered that fueleconomy.gov does not have a section for semi-trucks[6]. 
I wonder if that was blocked by the automotive industry on purpose.<p>[0] <a href="https://www.youtube.com/watch?v=nONx_dgr55I&t=14m" rel="nofollow">https://www.youtube.com/watch?v=nONx_dgr55I&t=14m</a><p>[1] <a href="https://hdstruckdrivinginstitute.com/semi-trucks-numbers/" rel="nofollow">https://hdstruckdrivinginstitute.com/semi-trucks-numbers/</a><p>[2] <a href="http://www.globalpetrolprices.com/diesel_prices/North-America/" rel="nofollow">http://www.globalpetrolprices.com/diesel_prices/North-Americ...</a><p>[3] <a href="http://www.globalpetrolprices.com/diesel_prices/Europe/" rel="nofollow">http://www.globalpetrolprices.com/diesel_prices/Europe/</a><p>[4] <a href="https://www.eia.gov/electricity/monthly/epm_table_grapher.php?t=epmt_5_6_a" rel="nofollow">https://www.eia.gov/electricity/monthly/epm_table_grapher.ph...</a><p>[5] <a href="http://ec.europa.eu/eurostat/statistics-explained/index.php/Electricity_price_statistics" rel="nofollow">http://ec.europa.eu/eurostat/statistics-explained/index.php/...</a>, <a href="http://ec.europa.eu/eurostat/product?code=nrg_pc_204&language=en&mode=view" rel="nofollow">http://ec.europa.eu/eurostat/product?code=nrg_pc_204&languag...</a>, <a href="http://ec.europa.eu/eurostat/product?code=nrg_pc_205&language=en&mode=view" rel="nofollow">http://ec.europa.eu/eurostat/product?code=nrg_pc_205&languag...</a><p>[6] <a href="http://fueleconomy.gov/feg/search.shtml?words=semi" rel="nofollow">http://fueleconomy.gov/feg/search.shtml?words=semi</a> |
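To make the structure of all the estimates above explicit: every scenario is the same two-term model, with only the fuel share and the price ratio changing. In LaTeX (my notation, restating the comment's own numbers):<p><pre><code> \mathrm{cost}_{EU} \;=\; c_{\mathrm{fixed}} \;+\; c_{\mathrm{fuel}} \cdot \frac{p_{EU}}{p_{US}}
</code></pre>For the Norway domestic-to-domestic case: $c_{\mathrm{fuel}} = 0.207 \times 1.26 \approx \$0.26$/mile, $c_{\mathrm{fixed}} = 0.793 \times 1.26 \approx \$1.00$/mile, and $p_{EU}/p_{US} = 11.30/12.90 \approx 0.876$, giving $1.00 + 0.26 \times 0.876 \approx \$1.23$/mile, matching the figure above.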
There’s a Digital Media Crash, But No One Will Say It | Ad-driven revenue models are just not reliable long term for media publications anymore. All these sites dying or having issues were built on the old-world model created by newspapers 200 years ago. These models aren't even working for the big TV networks. Their news is now basically 20 minutes of old-people news stories (stuff about Jesus, Medicare, health care breakthroughs) book-ended by ads for prescription drugs and adult diapers. Their only viewers are 60+, and aging.<p>The new model is, actually, the oldest model: sponsorships. Remember how things like the Jack Benny Show were sponsored by Jell-O and Lucky Strikes? That model actually works again. It's strange, but it really is a bit liberating (I work for a sponsored pub). We actually have the funds and time to do extremely expansive pieces on deep topics we choose.<p>Having fewer people to keep happy means having fewer editorial bonds to advertisers. In a pub with dozens or even hundreds of advertisers, all in the market on which you're reporting, it's tough not to piss them off by reporting bad news about them. This happens all the time. ALL THE TIME. "You ran that story about our listeria outbreak, now we're not advertising Chipotle on your site anymore!" In the video game world, bad games with ads get good reviews, or ads are pulled. Movies are the same way. Many publications cover the very stuff they advertise, so it's a tricky situation.<p>You'd think sponsorships would be the same way, but they really aren't. Instead, sponsors get to post their own content alongside the real good stuff. Paid-for content, as it were, which isn't even always bad; it's just stuff these companies want to get out there where people will read it, rather than sitting unread on their corporate blogs.<p>I am now convinced the sponsorship model is the way out of this. It might not work as well outside of a confined vertical, however. One thing is for sure: sponsors love being able to tell their side of the story to our readers, and I feel like the readers just skip stuff they find too marketingy in favor of our really good, deep content anyway, so it's kind of a win-win.<p>The other thing I like about sponsorships is that they bring the collusion onto the table instead of leaving it hidden. In the past, I've worked at places where they've been adamant about separation of church and state: advertising and editorial are divided and do not talk, collude, or work together at all. You couldn't take more than a $15 lunch for free, could take no free trips or hotel rooms, and couldn't keep neat tchotchkes or product samples.<p>Meanwhile, these same places would ALWAYS put their foot on your neck, subtly, to influence content. They'd even send the lead sales guy and the head of editorial out to do joint meetings which were only designed to sell ads. If someone bought a large ad and you wrote a bad story about them, it could be reworked, or even killed entirely.<p>Sponsorships, however, are known to be collusion, right? Now that I am at a sponsored publication, I can take trips, dinners, hotel rooms. It's great! I'm still making a great effort not to be compromised, but now I can do that in Spain for a week at a conference. Makes a huge difference, frankly. I'm much better at covering a show far away if I am in the show hotel instead of the cheapest place my failing pub could afford to set me up in, 20 miles away on the side of the highway.<p>Journalists know how to be fair and balanced. 
It's kinda their whole bag. The policies publications put in place to dictate this stuff are the first to be ignored when things get thin and business goes sour. It's why some sites sell their entire skin to McGriddle: that's sales getting creative with the design team, because they can't get close to editorial. Not officially, anyway. Frankly, stuff like that is to be praised. It's innovative and likely kept some edit staff from being laid off or influenced.<p>In the advertiser model, the editorial team gets slapped around all the time when revenues sag. Once the layoffs start, editorial integrity usually goes out the window. Sadly, if it doesn't, the pub usually dies.<p>The nature of the business creates this death spiral where sites churn out more and more bad content, faster and faster, in favor of getting the most possible eyeballs on the most possible ads. The content becomes an afterthought. The more controversial and wrong it is, the more people read it, kinda like how Howard Stern had lots of listeners who hated him for years. Steal from Reddit, add 3 lines, post.<p>I once met a guy from Engadget who said he was in the Guinness Book of World Records as the world's fastest blogger. He could do 15 stories an hour. And he bragged about this, openly, like it was a badge of honor. Given some of the content on these BuzzFeed-like sites, I could do 30 crap stories in 10 minutes, but who the fuck would want to? They'd all be wrong and have kitten pictures in them to grab hits.<p>I guess this is a long way of saying this: digital media actually made journalism shittier for a while, but maybe this culling will fix things by making outlets figure out better, more innovative business models, allowing new, better voices to come to light. It's the easiest time ever to start your own outlet. Making money, however... that's always been the hard part. |
Why does man print “gimme gimme gimme” at 00:30? | Here's the commit from 2011:<p><a href="http://git.savannah.nongnu.org/cgit/man-db.git/commit/src/man.c?id=002a6339b1fe8f83f4808022a17e1aa379756d99" rel="nofollow">http://git.savannah.nongnu.org/cgit/man-db.git/commit/src/ma...</a><p><pre><code> 1 files changed, 9 insertions, 1 deletions
diff --git a/src/man.c b/src/man.c
index 1978329..48af3c0 100644
--- a/src/man.c
+++ b/src/man.c
@@ -1154,8 +1154,16 @@ int main (int argc, char *argv[])
debug ("\nusing %s as pager\n", pager);
- if (first_arg == argc)
+ if (first_arg == argc) {
+ /* http://twitter.com/#!/marnanel/status/132280557190119424 */
+ time_t now = time (NULL);
+ struct tm *localnow = localtime (&now);
+ if (localnow &&
localnow->tm_hour == 0 && localnow->tm_min == 30)
+ fprintf (stderr, "gimme gimme gimme\n");
+
gripe_no_name (NULL);
+ }
section_list = get_section_list ();</code></pre><p>The 00:30 trigger is a nod to ABBA's "Gimme! Gimme! Gimme! (A Man After Midnight)" -- "half past twelve" in the lyric -- which is apparently what the tweet linked in the code comment was asking for. |
The Problem of Doctors’ Salaries | Until I met my MD wife, I would have agreed with most of the points in this article. Doctors in the US are certainly well compensated relative to Doctors in other countries. Also, the story of physician credentialing as a means of restricting labor supply appeals to my economic intuition.<p>However, there are some weaknesses with this argument. To start with, physician compensation makes up just 8% of total healthcare costs in the US:<p><a href="https://www.jacksonhealthcare.com/media-room/news/md-salaries-as-percent-of-costs/" rel="nofollow">https://www.jacksonhealthcare.com/media-room/news/md-salarie...</a><p>So, if it were somehow cut in half, that would net a savings of just 4%. That assumes there wouldn't be a corresponding drop in quality as a result of the loss of physicians from the occupation. Keep in mind that this is an occupation whose supply is so tight that one of the primary ways being discussed to increase supply is by lowering the credentialing requirements for primary care by having Nurse Practitioners and Physician Assistants provide primary care, unsupervised by a Physician.<p>I think that's really the rub in terms of talk of lowering physician pay. The barriers to entry in the field (in the US) are so high that if physician pay were significantly reduced without a corresponding reduction in the barriers to entry, it would disincentivize our smartest and hardest-working people from becoming physicians, which would lead to a significant decrease in the quality of healthcare in the US.<p>In the case of my wife, she is both smarter and harder working than I am. She spent her 20's and early 30's working 60+ hour weeks doing work that is very taxing mentally and emotionally. She went through four years of undergrad and four years of medical school. She exited medical school at 26 with a significant amount of student debt (though less than many, and at a lower interest rate due to smart decisions on her part) that started capitalizing immediately. She then spent four years in a residency program, making $50,000 a year, and then another four years in two different fellowship programs also making $50,000. When she finally got to a point where she started making a Physician's level of compensation, she was almost 35 years old with $240k in student loan debt and a small amount of savings and retirement built up. She could have dropped or failed out at any time and, if she had, she would have taken on the full cost of the student loan debt without the ability to repay it. This level of debt represents a significant personal risk for physicians that shouldn't be discounted. If you were to compare our financial situations independently, as a software developer, I'm in a better position financially. I've had the opportunity to invest my earnings in my 20's and 30's and haven't had to forfeit so much of my later earnings to debt. As a result of this financial head start, even though she makes more than twice what I make, she doesn't have a chance to catch up with me in terms of wealth accumulation until she's in her 40's, assuming she's financially disciplined and doesn't succumb to physician lifestyle inflation (rough math at the end of this comment).<p>If she had instead aimed to become a Nurse Practitioner or Physician Assistant, she would have been looking at a low to mid 100k job in her mid-20's with maybe 40k in debt. 
This would have been a much lower risk in terms of debt, it's much less work to go into one of these fields, and I would argue it may be a better move financially because of the ability to put her money to work earlier from an investment standpoint.<p>So, if the carrot of higher compensation later down the road is eliminated, I think we will see fewer quality people entering the field. Physicians are some of the smartest and hardest-working people in the US. They'll be able to weigh the costs and benefits and will choose alternative fields that have better financial remuneration or lower risk. If the student loan burden were reduced for physicians, you might be able to get away with lower pay without disincentivizing becoming a physician. But I don't see the government providing more assistance to cover the cost of medical school.<p>If we really wanted to cut down costs for healthcare in the US, the first place to look is administrative costs. We pay more than twice what other countries pay in administrative costs as well. And administrative costs make up a greater percentage of the total money spent on healthcare. Administrative costs are mostly waste in the sense that they exist only to keep the gears turning so people can receive healthcare. One of the reasons administrative costs are so much higher is because of the complexity of the private insurance system in the US. As others have noted, some of the administrative burden would be simplified by a universal healthcare system.<p><a href="http://www.commonwealthfund.org/publications/in-the-literature/2014/sep/hospital-administrative-costs" rel="nofollow">http://www.commonwealthfund.org/publications/in-the-literatu...</a>
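<p>To make that head-start arithmetic concrete, here's a rough sketch. Every number in it is invented for illustration (a flat 7% return and the savings rates shown), not anyone's actual finances:<pre><code># Sketch of the compounding head start. Hypothetical inputs: 7% annual
# returns, a developer saving $20k/yr from age 22, a physician starting
# at 35 with -$240k of debt but saving $100k/yr thereafter.
def net_worth_by_age(start_age, start_worth, annual_savings, end_age=70, r=0.07):
    worth, w = {start_age: start_worth}, start_worth
    for age in range(start_age + 1, end_age + 1):
        w = w * (1 + r) + annual_savings  # grow last year's worth, add new savings
        worth[age] = w
    return worth

dev = net_worth_by_age(22, 0, 20_000)
doc = net_worth_by_age(35, -240_000, 100_000)
crossover = next((a for a in range(35, 71) if doc[a] >= dev[a]), None)
print(f"Physician catches up at age {crossover}")  # ~48 with these inputs</code></pre><p>The exact crossover age is hostage to the assumed return and savings rates; the point is only that a 13-year compounding head start takes a surprisingly long time to overcome. |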
Ask HN: Validating assumptions about the recruiting business | I was a technical recruiter for 8-ish years, and now I'm a developer. There are a few basic models in recruiting:<p>1. Pay contractors X/hour, and you charge X+Markup per hour. You need to be able to pay EVERY contractor NOW, and have enough money to wait until LATER, or you'll go BROKE (a sketch of this cash-flow math is at the end of this comment)<p>2. Charge a fee for a successful placement. In the USA I've most commonly seen 20%, and some agencies can get away with 25%. Very few charge more than that.<p>3. Retainer. You get paid cash up front to fill a position, and more when the position is filled.<p>4. Hourly recruiting. You charge for a recruiter's time.<p>It sounds like you're looking to operate as business model 1 and/or 2.<p>First, the basics:<p>- If you don't have any jobs to fill, you won't make money. So if you don't already have a proven method to get clients, I assume you're planning on dialing for dollars. "DIALING +1-415-200-0000... ring... 'hey, I'm a recruiter and...' CLICK... DIALING +1-415-200-0001.."<p>- If you don't know how to source candidates already, you'd better figure that out really darn fast. The entire recruiting industry revolves around trying to find some secret sauce built out of proprietary methods & databases for connecting with 'top talent'. Few recruiters are truly good at finding candidates in hard-to-reach places.<p>Now some thoughts on why it's hard to grow a recruiting business:<p>- You're already in tune with the fact that successful recruiters "probably don't want to give their secrets away", and that's correct, to a point. Any super successful recruiter's real secret is that they work their butts off as hard as anyone you've ever met, because being a persistent machine of a person is really the only way to make it (that I've seen). There is absolutely, positively no shortcut to success in the recruiting business. And someone who makes big $$$ has no incentive to come work for you.<p>- A mediocre recruiter will work for you, take all your money, get fired, and go do the same for another company willing to hire a mediocre recruiter.<p>- An inexperienced hire with a lot of potential will work for you, get good, and leave when they realize they can make more working independently than they can working for you. The barrier to entry for starting one's own dialing-for-dollars recruiting business is incredibly low. You only need a phone and a desk. That's all I had when I got started! And as soon as I got good enough to make it on my own, I left my boss' company & started my own!<p>- When trying to drum up new business, you will have a very hard time getting quality clients. The crappy clients will hire any recruiter that calls them, and the good clients will only take your business if you have quality candidates to present BEFORE you call them. No CTO/VP/Director/Manager wants to spend most of their time explaining their jobs to recruiters who will never show up with a quality candidate, which is what happens most of the time.<p>- Any attempt you make to automate your recruiting business is going to be extremely hard. Writing your own software is always hard. 
And writing software that ties into other recruiting platforms is difficult, because in my experience every recruiting solution on the planet tries to build a walled garden and/or own the user interface through which all recruiting work is done.<p>- If you do manage to hire a successful recruiter who wants to keep working for you, you'll need to keep them happy, and that will eat up more & more of your capital. You'll also need to be able to capture the magic that lives within your awesome recruiter to the point where you can hire other people & teach them to be awesome, too.<p>To learn a bit about running a service business, check out Managing the Professional Service Firm [1]. It touches on a lot of the challenges related to running a business like the one you want to create.<p>My advice to you is... if you really want to run a recruiting business, first learn how to be a recruiter yourself. Without that knowledge, I fear any time & effort you spend on trying to build a business that will depend on people you don't know to do work you don't understand will be a hole into which you pour all of your money. I'll tell you what. Send me all of your money, and I'll send you 50% of it right back. That way you can say you hired a recruiter, failed fast, and cut your losses while you still had some cash to speak about :)<p>Good luck!<p>[1] <a href="https://www.amazon.com/Managing-Professional-Service-David-Maister/dp/0684834316" rel="nofollow">https://www.amazon.com/Managing-Professional-Service-David-M...</a>.
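<p>To put rough numbers on the model-1 "pay NOW, collect LATER" trap from the top of this comment, here's a minimal sketch. The rates and the net-60 terms are hypothetical, but the shape of the problem is real:<pre><code># Model 1 cash flow for ONE contractor, with made-up numbers: you pay
# $70/hr weekly, bill the client $90/hr, and invoices are paid net-60
# (so billed hours turn into cash roughly 9 weeks later).
PAY_RATE, BILL_RATE = 70, 90
HOURS_PER_WEEK = 40
NET_TERMS_WEEKS = 9

cash, low_point = 0.0, 0.0
for week in range(1, 53):
    cash -= PAY_RATE * HOURS_PER_WEEK   # payroll goes out NOW
    if week > NET_TERMS_WEEKS:          # billings arrive LATER
        cash += BILL_RATE * HOURS_PER_WEEK
    low_point = min(low_point, cash)

print(f"Cash you must front: ${-low_point:,.0f}")  # $25,200
print(f"Position after a year: ${cash:,.0f}")      # $9,200</code></pre><p>Multiply that by ten contractors and you need a quarter million in working capital before the business funds itself -- that's the BROKE part. |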
Ask HN: Would there be a way to limit a website to geographically local traffic? | I was under the weather yesterday, so barely spent any time online. I am surprised to see replies here.<p>The use case:<p>TLDR:<p>I am stopping the San Diego Homeless Survival Guide. I am definitely going to be doing something else intended to be generally useful for homeless Americans. I am debating also separately curating a list of local homeless resources, but I have reservations about putting it out there for the whole world to see as I fear that may serve as an attractive nuisance. Due to the intended target audience, it seems to me that sign ups and a network likely won't work. So I wonder if there are ways to mostly limit it to locals from my end, even if the walls are leaky and imperfect.<p>Longer:<p>While homeless, I started a blog to keep track of information for me that I needed because the state of the art was paper handouts, often filled with inaccurate information.<p>I went with a blog because: Papers make me sick. If you get rained on, they stop being readable. And when you are homeless, you have to carry everything you own all the time everywhere, so every piece of paper was one more thing to carry.<p>Most websites for homeless services are donor facing. In other words, the main thrust of the website is to tell you "We do good work. Please give us money, donate goods and/or volunteer." So even if the agency has a website, it is incredibly challenging for an actual homeless person to go online and find out what services are available locally. Thus, my site attracted organic traffic even at a time when I had abandoned it and there were zero updates and zero attempts to promote the site because it was client facing: it listed information useful to actual homeless people looking for resources.<p>I am not going to be updating the site further. I am leaving it up for now, but I don't want to promote it further.<p>The longer the site went on, the more I tried to write generally useful information rather than location specific information. However, the name of the location is in the name of the site. This causes two problems: 1) People outside of the area see it as not relevant to their needs and 2) alternately, homeless people take the existence of the site as a suggestion to move there.<p>I am again in an area with a fairly large homeless population. Locals here have voiced the opinion that the homeless are attracted to the area because there are services here. That may be a factor, but I think a more likely root cause is that it has relatively temperate weather for the state of Washington. January average lows are around 34 degrees. In Spokane and other parts of the state, average January lows are in the low to mid twenties.<p>Most homeless services, like soup kitchens, are useful for helping people survive a crisis. But they tend to do a poor job of helping people solve their problems in order to establish a stable middle class life. To be fair, the kinds of problems that lead to homelessness are not easily resolved. But this situation means that accessing homeless services comes with a danger that you just get good at being a freeloader. This can actively undermine the pursuit of self sufficiency.<p>When I was going to homeless services, it exposed me to germs and cigarette smoke and other health hazards, which undermined my efforts to get well enough to work and support myself. 
Most "practical" conversation amongst the homeless was about things like where to get another free meal or other charity, not about things like how to make money while homeless.<p>It can take two hours of standing in line to get a free meal. If you do that three times a day, you have spent six hours just on staying fed. That is six hours you can't job hunt or otherwise try to solve your problems.<p>The more talented you get at accessing homeless services, the more stuck you can become, in part because of how they are structured. They figure your time as a poor person isn't worth anything, so you should be okay with taking two hours to get a meal. Worse, many of them assume you are an alcoholic or addict, so they may subconsciously figure that the more of your time they waste, the less time you will spend getting high.<p>While homeless, I figured out what kinds of things were genuinely helpful to me, things that helped me survive in the here and now while also helping me move towards my goals of self sufficiency rather than helping to keep me stuck. I want to keep writing about those things.<p>But I have very mixed feelings about once again curating a list of local services. On the one hand, I think there is need for this information. On the other hand, I worry that broadcasting it worldwide may do more harm than good.<p>Since all locations in the US seem to do an equally poor job of putting out client facing websites for serving the poor, it seems to me that simply creating a site with good client facing information potentially creates an attractive nuisance. If you are homeless and trying to find what you need locally is a constant uphill battle, but you can go online and find a curated list of resources in some other city, even if it is quite far away, it may be vastly easier to just hop a bus or hitchhike to that city than to keep knocking on doors and getting paper handouts locally and trying to painstakingly piece together information and solutions.<p>So, I am wondering if making it hard to find a list of resources if you are outside the region would be a means to get information into the hands of locals without potentially attracting freeloaders from far flung corners of the US. |
The Uncertain Future of Bitcoin Futures | It's been fun watching Bitcoin as a whole and especially watching financial folks try to wrap their brains around Bitcoin, its price and seeing them try to define exactly "what it is". Even my in-laws are investing a small amount in Bitcoin and Litecoin ... the two of them have a Verizon Wireless internet connection and a computer from 2004 -- read: not technically savvy in the least.<p>As someone who's done about 20 years' worth of investing in stocks and mutual funds and has a very low tolerance for gambling[0], I wouldn't <i>dream</i> of investing in Bitcoin. Last year, however, I <i>did</i> mine Ethereum. The result of that mining turned out quite well for me, costing a little in electricity[1]. I further rationalized it by the fact that I could put the video cards I purchased to work for me in other ways -- I might not have purchased the card that I did were I not targeting price/watt in mining, but the cards I purchased were slightly better than the ones I would have and I'm very happy with them for rendering and some of the fun I'm having playing with ML using CUDA, and hey my ffmpeg transcodes are quite fast, too! I jumped on ETH mainly because of the fact that I had played with Bitcoin in the early days and managed to mine about 20 of them which I lost due to reloading a server[2] and not caring about my USD$0.25 worth of BTC[3].<p>All of that aside, I don't mine any longer -- the difficulty is too high and with my setup, the best I can hope for is to get a couple of Ether a year (back-of-the-envelope math at the end of this comment). I started mining ETH because they were doing something different from BTC and I had the sense that the price would go up (though, again, I didn't expect it to hit $475 as it has today). The thing of it is -- nobody can answer the question "why" in a way that makes me comfortable. It's, effectively, a bet that someone else's imagination will decide that these bits are worth more than they are now. While I've gotten the sense that a lot of investing works this way, at least from time to time it <i>appears</i> to track with real-world things, like earnings and the state of the overall economic sector that the company is a part of.<p>I think at least some of the price movement has to do with the fact that Bitcoin is wildly unregulated (and I believe when the inevitable hammer of government steps in, that price won't soar or remain as high). The fact that my in-laws have money in the game (though, for practical purposes, it's pocket change) tells me the days of free-wheeling non-regulation are very numbered, though I have a hard time figuring out <i>how</i> you even regulate something like this (particularly a product like ZCash). And the worst of it is that from a regulatory standpoint, it'd be easy to argue that a lot of the transactions taking place look a whole lot more like money laundering and a whole lot less like "a digital equivalent to cash[4]". The fact is, the treasury components of our governments <i>love</i> that we hate carrying around pieces of easily lost paper and prefer to use regulated intermediaries that helpfully provide reports to them about our incomes and spending habits which can be used to ensure we're operating according to the law. The number of traditionally "Cash Only" businesses is approaching zero, which is starting to put <i>any</i> transaction that doesn't involve one of these regulated entities into the "questionable" category.<p>I'm fascinated by all of this, personally. 
It's mostly played out as I have expected -- when the value of BTC started exploding, the number of cryptocurrencies and cryptocurrency-related products started exploding, as did the number of thefts and viruses/criminal elements specifically attacking or using cryptocurrencies. And, of course, the scammy-sounding me-too start-ups trying to find some way to use buzzwords from the cryptocurrency space (though the "ICO"s surprised me quite a bit both in the sheer number of them and the number of people willing to buy into them). Most of the off-shoot cryptocurrencies were nearly identical to one of the leading products with ZEC and ETH being outliers with unique features and spawning their own copy-cats (I'm also watching PIRL with curiosity[5] since it seems centered on improving transaction speeds, more stability around management/updates and focused early on getting an actual marketplace up -- plus, it's based on ETH, which I find very interesting as a technology).<p>[0] I brought $20 to Las Vegas with the full intent of losing it. It took four days for that to happen and I still felt badly about having wasted it despite it providing me with several hours of entertainment across a variety of video poker machines.<p>[1] As a guy who has three servers running in his basement and likes to keep the air conditioning set at a frigid 68 degrees, the difference between my already outrageous electric bill and my slightly more outrageous electric bill wasn't something my budget noticed. The servers were mostly idle, anyway, so why not have them doing something while they're not doing anything?<p>[2] I found the technology interesting and mined more as an academic experience. I don't remember the exact date, but it was within a few months of when the network went live ... I didn't even use a pool and managed to net somewhere around 20-25 of them in a couple of months. I loved the "idea" but wrote it all off as a cypherpunk's wet dream, not something that would actually have value in the future. Oops!<p>[3] IIRC, there wasn't even a way to see the price of BTC in USD because the only way to turn them into cash was to find someone who wanted to buy them from you.<p>[4] As in, similar anonymity and similar risk -- you lose or have your wallet stolen, your cash is also lost. Unfortunately, people won't see it that way when it's <i>their</i> money.<p>[5] I even ended up firing up my mining rig, though that was mostly because I was feeling the itch to delve into CUDA, again, so I took some time to add some optimizations to the mining code I had tweaked last year. I don't expect to make money from it, just like I didn't expect to for ETH, but I'm enjoying playing around with code and technologies that I don't normally get exposure to in my day job.
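<p>For the curious, the "couple of Ether a year" guess earlier in this comment is just expected-value arithmetic: your share of the network hashrate, times blocks per year, times the block reward. These inputs are rough, from memory, for late 2017:<pre><code># Expected mining reward = (your hashrate / network hashrate)
#                          * blocks per year * block reward.
# All inputs are approximate, late-2017-ish Ethereum figures.
my_hashrate      = 60e6      # 60 MH/s -- a small two-GPU rig
network_hashrate = 150e12    # ~150 TH/s across the whole network
block_time_sec   = 14.5
block_reward     = 3.0       # ETH per block (ignoring fees and uncles)

blocks_per_year = 365 * 24 * 3600 / block_time_sec
expected_eth = my_hashrate / network_hashrate * blocks_per_year * block_reward
print(f"~{expected_eth:.1f} ETH/year")  # ~2.6 with these inputs</code></pre> |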
Australia's Housing Frenzy | So I'm from Perth. Well, technically I largely grew up in mining towns but basically I'm from Perth. I've now lived in NYC for 7 years. I've also lived in London, Zurich and Germany (Cologne).<p>Perth had a lot going for it 20 years ago. The standard of living was incredibly affordable. The weather was nice. It was a dull place but well-suited to outdoor activities and raising a family.<p>In the early 2000s the resources boom happened, fueled by China's insatiable appetite. In the space of 5 years, a 1970s brick home <5km from the CBD went from <A$100k to A$400k. One of my biggest regrets was walking away from buying a house in 2002 that overlooked the ocean for A$430k that 4 years later would've been worth probably A$1.5-1.8m.<p>Now at the time the same headlines reigned. Prices are going to crash. But in hindsight there were four important factors:<p>1. Housing was previously too cheap and there was a correction;<p>2. The resources boom created a massive backlog of construction projects that choked supply for building in the housing market, meaning building a house took longer and was more expensive. In the 90s you'd have TV ads for house and land packages for A$100k that would take 3-6 months to build. In a short space of time building a house took 12-18 months and cost $300k+ plus land.<p>3. The resource boom brought an influx of skilled migrants that increased demand.<p>4. Those in mining and construction got paid a whole lot more which means they could spend a whole lot more and there's only so much inventory of desirable property (near the city or on the coast or river). Trading up to those properties created a windfall from current owners which they then spent and so on.<p>So the GFC came in 2007. Australia didn't have the subprime problem that the US did. More importantly though, in 2008 China stopped buying as many resources. Again the predictions of doom came. Momentum probably still drove the market up (in parts anyway) for another 2-3 years. Since then it's either gone down (<10% mostly) or stagnated. In the last few years this has caught up with rents and properties that once might have 50 applicants for $500/week now couldn't be filled at $350/week.<p>This seems to be the Australian norm for property cycles. It happened in Sydney in the 1970s. Sydney became really expensive but probably stagnated until the mid 1990s at least.<p>While this was all going on in Perth and Brisbane (Western Australia and Queensland are the two big resource states), Sydney and Melbourne were going nowhere.<p>What changed around 2010-2011 is the same thing that happened in many other places around the world: money came into real estate in the large cities. Particularly foreign money. Particularly Chinese money.<p>Some will argue low interest rates were driving this but they're wrong. Properties above $3-5m in NYC for example aren't bought by people with a mortgage. They're bought with cash.<p>It seems like certain people in China built up a large amount of wealth in the 2000s and the government placed restrictions on how that capital could leave. I believe that certain investments including real estate were one such exception. Wealthy Chinese wanted to get money beyond Beijing's control. They also wanted to have an "out" by buying residency/citizenship in other countries.<p>So in the last 7 years median prices in Sydney went from (IIRC) ~A$680k to A$1.1m. 
Bear in mind that this is even with large swathes of suburban wastelands in Sydney's West ostensibly bringing the median down. The effect on the harbour, the ocean and in Sydney's inner suburbs and North Shore is even more pronounced.<p>All the while software engineers might still be getting paid the same A$150k + bonus they could get 10-15 years ago.<p>High property prices really are a disease. They make rent more expensive. They make everything you buy more expensive since something has to cover the cost of the premises those goods and services come from.<p>I believe Switzerland has a far better policy approach to this than many other countries I've seen. The intent seems to be that it is an undesirable outcome to have foreign money come in and buy up the country, basically. Likewise, they don't want a rampant speculative market, so short-term capital gains on property are taxed punitively. It may have changed, but when I was there this meant 100% of the gain was taxed on a sale within 2 years, with the rate scaling down over the following 8 years.<p>People need somewhere to live. Even if they don't buy, prices drive rent. Only luxury buildings get built in NYC now because it doesn't cost much more than "affordable" buildings but it's a whole lot more profitable. That needs to change. The world's cities can't just be used for money laundering and a holding asset for the ultra-wealthy across the world.<p>So as for Sydney and Melbourne I don't think the prices are going to come crashing down. They may stagnate for years and even dip. But the Australian banking industry is pretty well-regulated (by comparison to the US anyway) and for those at the high end of the market, they've paid cash anyway so who cares what they do or lose? What I mean is dropping prices won't force them to sell. |
My unusual hobby | Terrific!<p>I took the liberty of translating your module to TLA+, my formal tool of choice (recently, I've been learning Lean, which is similar to Coq, but I find it much harder than TLA+). I tried to stick to your naming and style (which deviate somewhat from TLA+'s idiomatic styling), and, as you can see, the result is extremely similar, but I guess that when I try writing the proofs, some changes to the definitions would need to be made.<p>One major difference is that proofs in TLA+ are completely declarative. You just list the target and the required axioms, lemmas and definitions that require expanding. Usually, the proof requires breaking the theorem apart into multiple intermediate steps, but in the case of the proof you listed, TLA+ is able to find the proof completely automatically:<p><pre><code> LEMMA supremumUniqueness ≜
∀ P ∈ SUBSET T : ∀ x1, x2 ∈ T : supremum(P, x1) ∧ supremum(P, x2) ⇒ x1 = x2
PROOF BY antisym DEF supremum (* we tell the prover to use the antisym axiom and expand the definition of supremum *)
</code></pre>
The natDiff lemma doesn't even need to be stated, as it's automatically deduced from the built-in axioms/theorems:<p><pre><code> LEMMA natDiff ≜ ∀ n1, n2 ∈ Nat : ∃ n3 ∈ Nat : n1 = n2 + n3 ∨ n2 = n1 + n3
PROOF OBVIOUS (* this is automatically verified using just built-in axioms/theorems *)
</code></pre>
Another difference is that TLA+ is untyped (which makes the notation more similar to ordinary math), but, as you can see, this doesn't make much of a difference. The only things that are different from ordinary math notation are that function application uses square brackets (parentheses are used for operator substitution; operators are different from functions, but that's a subtlety; you can think of operators as polymorphic functions or as macros), set comprehension uses a colon instead of a vertical bar, a colon is also used in lieu of parentheses after quantifiers, and aligned lists of connectives (conjunctions and disjunctions) are read as if there were parentheses surrounding each aligned clause. Also `SUBSET T` means the powerset of T.<p>Here's the module (without proofs, except for the one above):<p><pre><code> ------------------------------- MODULE Kleene -------------------------------
EXTENDS Naturals
(*
Assumption: Let (T, leq) be a partially ordered set, or poset. A poset is
a set with a binary relation which is reflexive, transitive, and
antisymmetric.
*)
CONSTANT T
CONSTANT _ ≼ _
AXIOM refl ≜ ∀ x ∈ T : x ≼ x
AXIOM trans ≜ ∀ x, y, z ∈ T : x ≼ y ∧ y ≼ z ⇒ x ≼ z
AXIOM antisym ≜ ∀ x, y ∈ T : x ≼ y ∧ y ≼ x ⇒ x = y
(*
A supremum of a subset of T is a least element of T which is greater than
or equal to every element in the subset. This is also called a join or least
upper bound.
*)
supremum(P, x1) ≜ ∧ x1 ∈ T
                  ∧ ∀ x2 ∈ P : x2 ≼ x1
                  ∧ ∀ x3 ∈ T : (∀ x2 ∈ P : x2 ≼ x3) ⇒ x1 ≼ x3
(*
A directed subset of T is a non-empty subset of T such that any two elements
in the subset have an upper bound in the subset.
*)
directed(P) ≜ ∧ P ≠ {}
∧ ∀ x1, x2 ∈ P : ∃ x3 ∈ P : x1 ≼ x3 ∧ x2 ≼ x3
(*
Assumption: Let the partial order be directed-complete. That means every
directed subset has a supremum.
*)
AXIOM directedComplete ≜ ∀ P ∈ SUBSET T : directed(P) ⇒ ∃ x ∈ T : supremum(P, x)
(*
Assumption: Let T have a least element called bottom. This makes our partial
order a pointed directed-complete partial order.
*)
CONSTANT bottom
AXIOM bottomLeast ≜ bottom ∈ T ∧ ∀ x ∈ T : bottom ≼ x
(*
A monotone function is one which preserves order. We only need to consider
functions for which the domain and codomain are identical and have the same
order relation, but this need not be the case for monotone functions in
general.
*)
monotone(f) ≜ ∀ x1, x2 ∈ DOMAIN f : x1 ≼ x2 ⇒ f[x1] ≼ f[x2]
(*
A function is Scott-continuous if it preserves suprema of directed subsets.
We only need to consider functions for which the domain and codomain are
identical and have the same order relation, but this need not be the case for
continuous functions in general.
*)
Range(f) ≜ { f[x] : x ∈ DOMAIN f }
continuous(f) ≜
  ∀ P ∈ SUBSET T : ∀ x1 ∈ T :
    directed(P) ∧ supremum(P, x1) ⇒ supremum({ f[x] : x ∈ P }, f[x1])
(* This function performs iterated application of a function to bottom. *)
RECURSIVE approx(_, _)
approx(f, n) ≜ IF n = 0 THEN bottom ELSE f[approx(f, n-1)]
(* We will need this simple lemma about pairs of natural numbers. *)
LEMMA natDiff ≜ ∀ n1, n2 ∈ Nat : ∃ n3 ∈ Nat : n1 = n2 + n3 ∨ n2 = n1 + n3
(* The supremum of a subset of T, if it exists, is unique. *)
LEMMA supremumUniqueness ≜
∀ P ∈ SUBSET T : ∀ x1, x2 ∈ T : supremum(P, x1) ∧ supremum(P, x2) ⇒ x1 = x2
PROOF BY antisym DEF supremum
(* Scott-continuity implies monotonicity. *)
LEMMA continuousImpliesMonotone ≜ ∀ f : continuous(f) ⇒ monotone(f)
(*
Iterated applications of a monotone function f to bottom form an ω-chain,
which means they are a totally ordered subset of T. This ω-chain is called
the ascending Kleene chain of f.
*)
LEMMA omegaChain ≜
∀ f : ∀ n, m ∈ Nat :
monotone(f) ⇒
approx(f, n) ≼ approx(f, m) ∨ approx(f, m) ≼ approx(f, n)
(* The ascending Kleene chain of f is directed. *)
LEMMA kleeneChainDirected ≜
∀ f : monotone(f) ⇒ directed({ approx(f, n) : n ∈ Nat })
(*
The Kleene fixed-point theorem states that the least fixed-point of a Scott-
continuous function over a pointed directed-complete partial order exists and
is the supremum of the ascending Kleene chain.
*)
THEOREM kleene ≜
∀ f : continuous(f) ⇒
∃ x1 ∈ T :
∧ supremum({ approx(f, n) : n ∈ Nat }, x1)
∧ f[x1] = x1
∧ ∀ x2 ∈ T : f[x2] = x2 ⇒ x1 ≼ x2
=============================================================================</code></pre>
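<p>To see concretely what the theorem promises, here is a tiny finite instance -- in Python rather than TLA+, and only a sketch. Take the powerset of a small set of nodes ordered by inclusion (bottom = the empty set) and a monotone f; iterating f from bottom, exactly like approx(f, n), climbs the Kleene chain and stops at the least fixed point, which here is the set of nodes reachable from a seed:<pre><code># Finite powerset lattice, ordered by subset inclusion; bottom = set().
EDGES = {"a": {"b"}, "b": {"c"}, "c": set(), "d": {"a"}}
SEEDS = {"a"}

def f(s):
    # Monotone: adds the seeds plus every successor of a node already in s.
    return SEEDS | s | {m for n in s for m in EDGES[n]}

x = set()            # bottom
while f(x) != x:     # walk the ascending Kleene chain  {} <= f({}) <= ...
    x = f(x)
print(sorted(x))     # ['a', 'b', 'c'] -- the least fixed point of f</code></pre><p>The loop terminates only because this lattice is finite; the theorem's content is that in the general directed-complete case the supremum of that chain still exists and is still the least fixed point. |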
The Best Monitor for Programming: A Cheap 40" 4K TV | To each his own, I guess. In my case, I find advantages to both <i>so much</i> that I routinely switch between them.<p>My main workstation is connected to a 55" Vizio HDTV (a model that supports Chroma 4:4:4 and represents text of various colors very well) that I picked up last year during a post-Thanksgiving sale. While I love it, I do the lion's share of my development using a 1080p display connected to a laptop. Is the 4K screen superior? Absolutely -- in almost every way -- except for portability.<p>I made the switch from multiple 1080p monitors to a single laptop monitor a few years ago and ditched the multiple monitors for the single 4K (which had a total resolution about equal to my multi-monitor arrangement). The transition was tricky at first, though; coupled with a Visual Studio extension that aided in navigation/visual identification of important code points (something I wrote for myself), I found <i>downgrading</i> to the single 1080p display took far less time to get used to than I expected.<p>The one drawback turned out to be a positive thing in the long run: by not being able to see so much code on-screen at all times, it forced me to write code more carefully. Where I could have gotten away with not splitting a class up into smaller pieces, before, because I could see the whole thing on-screen, I couldn't do that any longer and continue to be productive writing code. Sticking with small methods that "do one thing" and small classes that have a single purpose is a best practice and it's one I was pretty religious about <i>before</i>, however, there are times when it's necessary to "get the code out the door" and corners get cut, mostly out of a false sense that writing it dirty will be faster[0].<p>There's one place where the 4K screen is <i>vastly</i> superior, though -- debugging. You can get your code, the locals/watch and the application visible in one screen. This, too, turned out to be a double-edged sword. It was easy to skip seemingly unimportant unit tests when I knew my debugging environment was pristine. But the reality (for me at least) is that the <i>moment</i> I have to hop into the debugger and check locals/watch to see what's going on, I can expect to waste an hour fixing a problem 80% of the time. Here's the thing, though: writing tests to cover my code rarely takes more than an hour or two, but the simple act of doing so causes me to <i>rethink the problem</i> from a testing standpoint. I'll often find the bug before I've ever run the test to validate that it passes -- and about 1/3 of the time, that bug was <i>subtle</i>. Less often, but still frequently, I'll discover during the act of writing the test that the code could be refactored in a more logical manner, which results in better code. These facts mean I write tests <i>regardless</i> of having a great debugging environment, and I'm obsessive about it -- covering code that seems obvious[1] -- so I end up rarely needing the benefits that a 4K screen offers.<p>Being handicapped by the smaller screen on my laptop does come with advantages beyond just those introduced by the constrained environment. I'm not tethered to a room or location to code. 
Sometimes walking away or changing my physical environment ends up allowing me to avoid an actual "break" -- just moving from the office to the kitchen (when I worked at home) or from my desk to the office-kitchen couch (now that I'm an office dweller) causes enough of a shift to get me thinking correctly again.<p>All of that aside, one area where the 4K screen shines is anything to do with multimedia, 3D and image editing. Being able to do detail work, zoomed in on a screen that accurately reproduces color -- and at a cost that's a fraction of 4K monitors -- is simply fantastic. I also find the brightness of the 4K TV I purchased to be excellent -- the IPS monitors that I own tend to default to a brightness setting that is almost excessive (yes, adjustable, I know). I purchased a TV that was evenly backlit and once I got everything configured properly, I don't mess with it. The size and amount of light it outputs is pleasant; it's large enough that I can use it with the lights turned off in my office (if I have a headache) and it keeps the room reasonably lit while not blasting so much light out of the display as to be obnoxious. Despite not being IPS, being large and me sitting a couple of feet from it, I didn't notice any issues with edges looking dimmer due to the angle -- though I mostly ignore the edges because I rarely have <i>anything</i> maximized on that screen. It's just not necessary when you have all of that real estate -- you tend to simply move less important things off to an edge and put what you're working on in the middle in a smallish window.<p>[0] And it sometimes is in the short-run, but if that code has any importance to the program, you pay for it.<p>[1] I'm not "Test Driven" -- I write tests "second" because I started writing code quite a bit before "Unit Testing" was a common practice and I find the act of doing "Tests First" to feel backward. Writing tests afterward seems to have the same benefits as doing it the other way around, though it sometimes drives code changes. I've been doing it this way for too long to switch and be as productive as I am. But I cover <i>everything</i> -- I once wrote tests to cover a configuration lookup class that was basically a bunch of retrievals from a dictionary. A coworker chided me for it until I informed him that I found four bugs (and the best part is that said coworker was the one who wrote the buggy code). Because of the way the configuration routine worked -- cascading from a set of values in the database with fallbacks to values in a file and then finally constant values, all of which would <i>always</i> be set correctly from top to bottom in our development environment, the issues I unearthed would have <i>easily</i> gone unnoticed until a customer discovered them. |
macOS High Sierra: Anyone can login as “root” with empty password | Pyramid's OSx version of Unix (a dual-universe Unix supporting both 4.xBSD and System V) [1] had a bug in the "passwd" program, such that if somebody edited /etc/passwd with a text editor and introduced a blank line (say at the end of the file, or anywhere), the next person who changed their password with the setuid root passwd program would cause the blank line to be replaced by "::0:0:::" (empty user name, empty password, uid 0, gid 0), which then let you get a root shell with 'su ""', and log in as root by pressing the return key to the Login: prompt. (Well it wasn't quite that simple. The email explains.)<p><a href="https://en.wikipedia.org/wiki/Pyramid_Technology" rel="nofollow">https://en.wikipedia.org/wiki/Pyramid_Technology</a><p>Here's the email in which I reported it to the staff mailing list.<p><pre><code> Date: Tue, 30 Sep 86 03:53:12 EDT
From: Don Hopkins <[email protected]>
Message-Id: <[email protected]>
To: [email protected], [email protected],
Pete "Gymble Roulette" Cottrell <[email protected]>
In-Reply-To: Chris Torek's message of Mon, 29 Sep 86 22:57:57 EDT
Subject: stranger and stranger and stranger and stranger and stranger
Date: Mon, 29 Sep 86 22:57:57 EDT
From: Chris Torek <[email protected]>
Gymble has been `upgraded'.
Pyramid's new login program requires that every account have a
password.
The remote login system works by having special, password-less
accounts.
Fun.
Pyramid's has obviously put a WHOLE lot of thought into their nifty
security measures in the new release.
Is it only half installed, or what? I can't find much in the way of
sources. /usr/src (on the ucb side of the universe at lease) is quite
sparse.
On gymble, if there is a stray newline at the end of /etc/passwd, the
next time passwd is run, a nasty little "::0:0:::" entry gets added on
that line! [Ye Olde Standard Unix "passwd" Bug That MUST Have Been Put
There On Purpose.] So I tacked a newline onto the end with vipw to see
how much fun I could have with this....
One effect is that I got a root shell by typing:
% su ""
But that's not nearly as bad as the effect of typing:
% rlogin gymble -l ""
All I typed after that was <cr>:
you don't hasword: New passhoose one new
word: <cr>
se a lonNew passger password.
word: <cr>
se a lonNew password:ger password.
<cr>
Please use a longer password.
Password: <cr>
Retype new password: <cr>
Connection closed
Yes, it was quite garbled for me, too: you're not seeing things, or on
ttyh4. I tried it several times, and it was still garbled. But I'm not
EVEN going to complain about it being garbled, though, for three
reasons: 1) It's the effect of a brand new Pyramid "feature", and
being used to their software releases, it seems only trivial cosmetic,
comparitivly. 2) I want to be able to get to sleep tonight, so I'm
just going to pretend it didn't happen. 3) There are PLEANTY of things
to complain about that are much much much worse. [My guess, though,
would be that something is writing to /dev/tty one way, and something
else isn't.] Except for this sentence, I will also completely ignore
the fact that it closed the connection after setting the password, in
a generous fit of compassion for overworked programmers with
ridiculous deadlines.
So then there was an entry in /etc/passwd where the ::0:0::: had been:
:7h37OHz9Ww/oY:0:0:::
i.e., it let me insist upon a password it thought was too short by
repeating it. (A somewhat undocumented feature of the passwd program.)
("That's not a bug, it's a feature!")
Then instead of recognizing an empty string as meaning no password,
and clearing out the field like it should, it encrypted the null
string and stuck it there. PRETTY CHEEZY, PYRAMID!!!! That means
grepping for entries in /etc/passwd that have null strings in the
password field will NOT necessarily find all accounts with no
password.
So just because I was enjoying myself so much, I once again did:
% rlogin gymble -l ""
Password: <cr>
[ message of the day et all ]
#
Wham, bam, thank you man! Instead of letting me in without prompting
for a password [like it should, according to everyone but pyramid], or
not allowing a null password and insisting I change it [like it
shouldn't, according to everyone but pyramid], it asked for a
password. I hit return, and sure enough the encrypted null string
matched what was in the passwd entry. It was quite difficult to resist
the temptation of deleting everyone's files and trashing the root
partition.
-Don
P.S.: First one to forward this to Pyramid is a turd.
</code></pre>
P.P.S.: The origin story of Pete's "Gymble Roulette" nickname is here: <a href="http://art.net/~hopkins/Don/text/gymble-roulette.html" rel="nofollow">http://art.net/~hopkins/Don/text/gymble-roulette.html</a> The postscript comment was an oblique reference to the fact that I'd previously gotten in trouble for forwarding Pete's hilarious "Gymble Roulette" email to a mailing list and somehow it found its way back to Pyramid. In my defense, he did say "Tell your friends and loved ones."
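<p>The email shows the symptom rather than Pyramid's source, but the likely mechanism is a classic: passwd re-writes every entry it parses, a blank line parses as a full set of empty fields, and an empty uid/gid string becomes 0 -- which is what atoi("") returns in C. A guess at that failure mode, sketched in Python:<pre><code># Speculative reconstruction of the bug -- not Pyramid's actual code.
def parse_entry(line):
    # A blank line splits into empty fields; empty uid/gid become 0,
    # mirroring C's atoi("") == 0.
    fields = (line.rstrip("\n").split(":") + [""] * 7)[:7]
    name, pw, uid, gid, gecos, home, shell = fields
    return name, pw, int(uid or 0), int(gid or 0), gecos, home, shell

def rewrite_passwd(lines):
    # Re-serialize every parsed entry, as a naive passwd program might.
    return ["%s:%s:%d:%d:%s:%s:%s\n" % parse_entry(l) for l in lines]

before = ["root:XaYbZc123:0:0:root:/:/bin/sh\n", "\n"]  # note the blank line
print("".join(rewrite_passwd(before)), end="")
# root:XaYbZc123:0:0:root:/:/bin/sh
# ::0:0:::   <- passwordless uid-0 entry, as in the email</code></pre> |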
Is Filecoin a $257 million Ponzi scheme? [pdf] | Hey HN, this is jbenet -- an author of Filecoin.<p>We think it's great that people ask hard questions, and get involved. It's great to see others studying our work and we really appreciate the open discourse. There are a few things from this article I’d like to address. (Despite the length of this post...) these are quick comments, and not a proper in-depth response.<p>- (a) The article gets some things right and some things wrong -- there is good summarizing of several of our projects, and discussion of many difficult aspects in these projects. The article discusses many technological aspects in good depth, and highlights difficulties in building these systems, aligning incentives, and the trials of past projects. The article also has significant inaccuracies. For example, the sale figure -- which appears in the title and impacts the analysis -- is incorrect. We raised $205M -- officially here: <a href="https://protocol.ai/blog/filecoin-sale-completed/" rel="nofollow">https://protocol.ai/blog/filecoin-sale-completed/</a><p>- (b) The authors chose a provocative title. As some commenters have already pointed out, the conclusion is “[we] believe that it is not one.” Despite Betteridge’s law ( <a href="https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines" rel="nofollow">https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines</a> ), many people who only read the headline will come to the opposite conclusion, and now we (not they) will have the burden of correcting those misunderstandings. Provocative titles, though they may drive imagination and clicks, can do a huge disservice to everyone in the space, and contribute to misinformation. Most people will only read the title, maybe the abstract, and use that to form and drive opinions. We choose titles of our research with diligence and care, and hope others do the same.<p>- (c) The article has a great technical overview of the Filecoin stack, and how it fits with IPFS and libp2p. This is a large structure with many pieces, and it is rare to see articles grasping how all the pieces fit together so well, and then explaining it cogently. In particular, it’s great to see this article diving deep and discussing advantages and disadvantages of low level technical structures (multihash, ipld, libp2p, and more). We modularized everything in the hope to generally improve peer-to-peer systems, and improve reusability. We hope these components will be useful to the author’s Tribler project (a network similar in goals to Filecoin), and we hope that we can also learn from and leverage solutions they have made.<p>- (d) many of the objectionable things described in this article are common in ICOs in general. Put another way, consider those claims also in terms of other significant token sales, such as Ethereum, Tezos, Polkadot, Blockstack, Cosmos, Golem. People were saying similar things about Ethereum when they did their sale in 2014. Perhaps worth doing a survey / analysis over all of them, comparing and contrasting the different things groups have done, how the ecosystem has improved, and suggest new directions.<p>- (e) It’s worth mentioning that the analysis gives a definition of a ponzi scheme, but their discussion does not map to that definition. Instead, the discussion centers on claims about future investor sentiment or speculation as the driver of value in the token, which is not the only way to establish value in token networks, and ignores the value of the services provided. 
That kind of analysis does not work for projects like Ethereum and other live and functioning crypto tokens. If the network is <i>useful</i>, and there is a way to <i>generate</i> or <i>introduce</i> value, by providing new or better services, and if the network can capture that value in the token itself (important step), then the tokens can hold value, based on the utility of the network as a service and not just or primarily speculation. Networks like Ethereum, Bitcoin, Zcash, and Filecoin aim to provide useful services, and much of the value stored in their tokens will be thanks to the utility of the networks.<p>Perhaps it’s worth pointing out that most crypto token projects are compared to ponzi schemes at some point :(<p><pre><code> - http://www.google.com/search?q=is+bitcoin+a+ponzi+scheme
- http://www.google.com/search?q=is+ethereum+a+ponzi+scheme
- http://www.google.com/search?q=is+zcash+a+ponzi+scheme
- http://www.google.com/search?q=is+tezos+a+ponzi+scheme
</code></pre>
- (f) The article discusses the SAFT and assurances to investors, but does not discuss them in the context of other token sales and ICOs. Most ICOs are structured as <i>donations</i> (not investments) to a project, with little to no legal recourse -- even though many people refer to these “donations” as being “investments”. In our case, we raised investment through an instrument (the SAFT) that is a direct liability to us, and gives investors greater guarantees on the completion of the project, or consequences otherwise. If we fail to deliver the network, we must return the proceeds of the token sale. Few token sales ever have such a clause. Our structure gives investors greater accountability, not less. The article discusses this in sec IV, but does not take into account that startups are similarly risky (i.e. that startup dissolution events return only remaining capital from the efforts), and does not mention how our structure improves on the ICO landscape in general.<p>- (g) We do share the legitimate concern that ICOs need stronger accountability, and some are structured in a way that leads to abuse. The community as a whole needs to raise the bar on accountability and ethical behavior. We have taken significant steps in this direction, not just in our sale but to improve the ecosystem -- the SAFT project, which was a gargantuan undertaking that many other networks are now using, is one example. Many other networks are introducing and improving structures. We believe token networks present a very important new way to form capital, with promising advantages to users, investors, and creators, but the space is still in its infancy, and significant changes are still ahead. Token sales have improved dramatically in the last three years, and we hope they continue to improve to find the right balance and protection of the interests of all parties involved with the network.<p>Thanks,
Juan |
“I Don't Like Your Examples” (2000) | I agree with cwyers that the examples are bad. Databases (relational ones, anyway) are good at describing relationships between clearly defined entities. The reason that people use Employees and Managers or Sales and Month in books is because they are clearly defined relationships and entities at the most basic level.<p>Those kinds of super-dumbed-down examples serve a purpose: to provide a prototypical way of thinking about the technology and how it's best used.<p>I mostly agree with the author about his politics, as far as I can tell from what he wrote. But I think his agenda got ahead of him. I had a really hard time going to a small religious school when I was a kid because a daily dose of "Jesus says . . ." seemed to me to be completely irrelevant to a math class. It was, for lack of a better term, an unnecessary context switch. WTF am I supposed to be thinking about right now? What I think or what I believe?<p>But examples of a war criminal database just don't fit when you're trying to teach someone to (hopefully) do more than just copy and paste some code. The point of these examples is to provide a mental template for how to think about relationships between entities and what entities are.<p>If, on the other hand, the book were supposed to be more advanced and dealt with topics like, "How to navigate hot-button topics in the workplace" or if the example really wanted to model how one might design a database that served the purpose of categorizing war criminals, I would be okay with it. As it is, I just think it's a garbage example.<p>I do, however, think the author has a point about how much we gloss over sublimated political and social speech. It's true. We do that. And we often accidentally espouse the status quo by trying to be neutral. That is a legitimate problem not only in technology but also in journalism and in every endeavor that involves written language.<p>From that point of view, I applaud the author for trying something different. And I can certainly understand the need to try something different when you've been doing basically the same thing for 10 years.<p>That's my general response to this. My specific response to it is that teachers--in whatever format: book, in-person, classroom--have an obligation to only teach. Never to preach. People can go to churches for that if they want. But if I'm teaching you, my mandate is to lead you. To point you in a direction that will enable you to increase your knowledge. I struggle with this quite a lot as a violin teacher of young children.<p>Classical music and its history and theory are full of politically and morally charged ideas. It's not all about how to get the fingers of your left hand to fall into a certain configuration in a certain time constraint. Or about how to put your bow in exactly the right place with regard to a vibrating string.<p>You have to manage your relationship with your audience. You have to understand why this music came to be, and yes, how things like religion were a factor. It's not simple.<p>But my job as a teacher isn't to tell my students that Mozart was a womanizing asshole, or to pass judgement on Tchaikovsky for being a closet homosexual who was very kind to his wife in spite of being very frustrated.<p>My job is to lead the student. To show them where facts can be discovered and ultimately to enable them to process those facts and draw their own conclusions. 
To give the student a framework for how to process facts, analyze them, and draw their own conclusions.<p>What I think about this author's approach is tied to my personal opinion of what we should be doing when we are in leadership positions. Saying that Kissinger is a war criminal doesn't inspire the kind of learning I think is appropriate. It's stated as a fact. It's the copy-paste mindset that is so detrimental to practically everyone. Memorize a fact. Repeat it. You need to learn how to do a thing on the web. Use jQuery. Repeat it.<p>I find this didactic method deeply and morally repellent, even though I agree with the politics espoused.<p>When we take up the mantle of a teacher or mentor or leader, we have an obligation to lead people down a path of growth. Not browbeat them with ideology. This approach smacks to me of someone who doesn't understand what teaching really means.<p>Finally, I would bet money that the author has a different attitude about things today, and that we shouldn't sit around crucifying someone for an experiment they did 17 years ago. It was mostly harmless. We all make mistakes when we are writing and teaching. I make mistakes as a teacher almost every day. Sometimes more than once.<p>It's a good topic for thought, and even though I think it was the wrong move, it makes me question and think about my methods and the ways I relate to people. So maybe it wasn't so far off the path of leading-not-preaching as I thought. |
IOTA Surges Past Ripple | I feel like sharing a curious e-mail encounter I had with IOTA founder David Sønstebø.<p>After writing a blog post about the limitations of the Bitcoin blockchain, which got some attention on HN[1], I received an e-mail from someone named David (I didn't know who he was at the time), asking me:<p>> Hey Rune,<p>> I read your interesting article analysis of Bitcoin limitations. I was wondering if you have had any time to check out IOTA (www.iota.org) and the p2p Flash Network within it (<a href="https://blog.iota.org/instant-feeless-flash-channels-88572d9a4385" rel="nofollow">https://blog.iota.org/instant-feeless-flash-channels-88572d9...</a>)?<p>> [..]<p>Assuming this was just some random person asking for my opinion on IOTA, I replied with a critique, pointing out two weaknesses I was familiar with:<p>> The only thing I know about IOTA is that it’s centralized, through the so-called Coordinator. This makes it uninteresting to me. As I understand, the IOTA team argues that this is just a preliminary precaution, which will change as the size of the network grows, but I’m skeptical of this claim.
In any case, until it actually becomes decentralized, I don’t think I will have enough interest in it to learn how it works. So I think I will wait for this to happen before taking the time to learn about the system.
Also, I know that the IOTA team designed their own hash function, which turned out to be vulnerable to collision attacks, which sounds rather amateurish to me.<p>He promptly replied to this critique -- which I thought I was offering someone seeking advice on the soundness of IOTA -- informing me that<p>> As the founder of IOTA I can answer these questions: [...]<p>Proceeding with a rebuttal[2] of the weaknesses I had pointed out, even though <i>he</i> was the one who had asked me, despite the fact that he didn't need any information about IOTA at all in the first place (given that he created it).<p>[1] <a href="https://news.ycombinator.com/item?id=15427662" rel="nofollow">https://news.ycombinator.com/item?id=15427662</a><p>[2] > 1) The Coordinator is quasi-centralized, you as a programmer can easily opt out of it if you want, it is not enforced upon you as a user, it's simply a "best practice" at the moment. Tangle is made to scale, so the argument is indeed that the Coordinator is only now in place to prevent against the 34% attack that all DLTs suffer from until the network has scaled. I don't see anything controversial about this, it is the only way to reach a truly decentralized scalable ledger. It is no different from Satoshi firing up his first miners to get the Bitcoin network to work in the first place. The Coordinator will very shortly also be distributed to consist of numerous nodes, at which point it will be a lot more decentralized. These are all well-known steps towards the long-term goal. IOTA never claimed to be production ready, nor do we do any handwavy nonsense like Ethereum and Bitcoin core mantras "we'll solve it with some computer science breakthroughs in the future", IOTA's roadmap is very simple and straightforward.<p>> 2) The hash function story has been so misrepresented and blown out of proportion it is comical. I'll spare you the pointless drama and conflict of interest from the guys carrying out the hit piece and only focus on what is important, in short: We spoke with the Keccak team back in 2014 about creating a trinary sponge based hash function for the inevitable arrival of trinary processors (we have designed our own as well) as ANNs, photonics, spintronics etc. all favor trinary over binary, hence we need a trinary hash function that is lightweight for IoT which utilize such chips. 'Curl' was born. The best way to thoroughly vet a hash function (which is vital) is to put it out there with a big incentive to crack it. This is what we did. Even if someone had broken the hash function entirely (no one ever did) there was no threat to the network due to the precautionary steps in place, and the fact that we had Keccak as back up as safety precaution #10. Right now we are working with several world-leading cryptographers, including your fellow countrymen of <a href="http://cybercrypt.dk/company/" rel="nofollow">http://cybercrypt.dk/company/</a> on further developing and optimizing Curl. This is far from amateurish, it is simply leading the way through genuine invention. Lightweight hash function development is a very active field of research.<p>> 3) We also invented full Proof of Stake, the first decentralized exchange, the first decentralized voting protocol, decentralized marketplace in 2013, pioneered using blockchain for ID, Supply Chain and IoT in 2014-2015. So while we're known to push boundaries, we have so far always been vindicated later on. 
IOTA being the first ledger to disrupt blockchain itself is a testament to this, but is naturally very hard to swallow for a lot of blockchain maximalists. However, when it comes to actual researchers the reception tends to be very positive, and as for companies we work with everyone from Cisco to Maersk to Bosch to Microsoft to IBM to Statoil, so outside of the niche cryptosphere the interest is mounting daily, due to the fact that they have concluded that they can't use blockchain due to its inherent scaling and fee limitations. I mention this so you realize that we didn't just hack some random shit together like most people do in this space.<p>> 4) [...] |
Error handling in Upspin | This blog is worth a read even if you don't care about Upspin or Go. Some details are certainly specific to Upspin (and they're marked as such), but there are also patterns worth thinking about for programmers of any background.<p>My favorite highlights:<p>> it is critical that communications [between Upspin servers] preserve the structure of errors<p>YES. This is something that's wildly underconsidered in most projects and indeed most programming languages. <i>Errors need to be serializable</i>... and <i>deserializable, losslessly</i>. Having errors which can serialize round-trip is a superpower. It unlocks the ability to have -- well, the blog gets to it later:<p>> an <i>operational</i> trace, showing the path through the elements of the system, rather than as an <i>execution</i> trace, showing the path through the code. The distinction is vital.<p>YESSS. From personal anecdote: I've been writing a collection of software recently which is broken up into several different executables. This is necessary and good in the system's design, because they have different failure domains (it's nice to send SIGINT to only one of them), and they also need to do some linux voodoo that's process-level, yada yada. The salient detail is that in this system, there are typically three processes deep in a tree for any user action. That means I <i>need</i> to have errors from the third level down be reported <i>clearly</i>... <i>across process boundaries</i>.<p>An operational trace is the correct model for this kind of system. Execution traces from any single process are relatively misguided.<p>> The Kind field classifies the error as one of a set of standard conditions<p>This is something I've seen emerge from <i>many</i> programmers lately, independently (and from a variety of languages)! There must be something good at the bottom of this idea if it's so emergent.<p>The "Kind" idea is particularly useful and cross-cutting. They're serializable -- trivially, and non-recursively, because they're a simple primitive type. And, per the guidelines of what would be useful in a polyglot world, they're also virtuous in that they're pretty much an enum, which can be represented in any language, with or without typesystem-level help.<p>(I also have a library which builds on this concept; more about that later.)<p>---<p>Now, some things I disagree with:<p>Right after describing the importance of serializable errors, Pike goes on to mention:<p>> we made Upspin's RPCs aware of these error types<p>This is a colossal mistake. Or rather, it's a perfectly reasonable thing to do when developing a single project; but it severely limits the reusability of the technique, and also limits what we can learn from it.<p>I'm a polyglot. I'd like to see a community -- with representatives from all languages -- focus on a simple definition of errors which can be reliably parsed and round-tripped to the wire and back in <i>any</i> language. For gophers, a common example should be Go and Javascript: I would like my web front-end to understand my errors completely!<p>I think this is eminently doable. Imagine a world where we could have the execution flow trace bounce between a whole series of microservices, losslessly! It's worth noting, however, that it would require an investment in keeping things simple; and in particular, controversially, <i>not</i> leaning on the language's type system too much, since it will not translate well if your program would like to interact with programs in other languages. 
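To make that concrete, here is a minimal sketch of such a wire format -- my own illustration, not Upspin's actual schema, and the field names are assumptions -- written in TypeScript because a web front-end is exactly the kind of foreign consumer I have in mind:<p><pre><code> // a closed set of standard conditions -- an enum any language can represent
 type ErrorKind = "not-found" | "permission" | "io" | "other";

 interface WireError {
   kind: ErrorKind;  // machine-checkable category
   msg: string;      // human-readable detail
   path: string[];   // operational trace: which services handled the request
 }

 // what a service three processes deep emits...
 const e: WireError = { kind: "not-found", msg: "no such user", path: ["gateway", "dirserver"] };
 const raw = JSON.stringify(e);

 // ...is exactly what any other process, in any language, parses back out, losslessly
 const back = JSON.parse(raw) as WireError;
 if (back.kind === "not-found") {
   console.log("lookup failed via " + back.path.join(" -> ") + ": " + back.msg);
 }
</code></pre>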
Simplicity would be key here. Maps, arrays, and strings.<p>---<p>Lastly, I said earlier I have a library. It's true. (I have two libraries, actually; one in which I tried to use the Go type system more heavily, and included stack traces -- both of which I've come to regard as Mistakes over time and practice; I'll not speak of that one.)<p>Gophers might be interested in this, but also I'd love commentary from folk of other language backgrounds:<p><a href="https://godoc.org/github.com/polydawn/go-errcat" rel="nofollow">https://godoc.org/github.com/polydawn/go-errcat</a><p>This library is an experiment, but using it in several projects has been very pleasing so far. It has many of the same ideas as in this blog, particularly "categories" (which are a clear equivalent of "kinds" -- what's in a name?). Some ideas are taken slightly farther: namely, there is <i>one</i> schema for serializing these errors, and it very clearly coerces things into string types early in the program: the intention of this is to make sure your program logic has exactly as much information available to it as another program will if deserializing this error later; any more information being available in the original process would be a form of moral hazard, tempting you to write logic which would be impossible to replicate in an external process.<p>The main experiment I have in mind with this library is <i>what's the appropriate scope for a kind/category?</i> Does it map cleanly to packages? Whole programs? Something else entirely? |
Ask HN: I'm a solopreneur and I feel demoralised | Creating an anonymous account to respond for obvious reasons as well...<p>I've been a <i>de facto</i> solopreneur for a couple of years now, and I deal with the same feelings every day. I don't particularly like the market I'm in—I really don't think there's any kind of social good that comes out of it (arguably it's a social ill), although on an individual basis I do like my customers. I have a partner who's been basically AWOL since taking a couple of new jobs (consecutively—neither of us was full time and she's an attorney in her day job). It's basically on me to handle not just the tech but sales, marketing, design, etc, most of which I'm really not good at (everything but tech, basically.) On top of that, because of the way we're structured and the agreement we negotiated when I started trading time for equity, back when she was more active, she's getting about 30% more of the cash-out from an impending sale.<p>(side rant: I used to be opposed to vesting schedules on a company that's bootstrapped—no more. If you have one or more partners, you need to be on a vesting schedule. Period. There's a reason those are a thing. As far as that goes, I basically made every mistake you can make in getting the company to this place—the fact that I'm even seeing a prospective payday is almost in spite of the mistakes we've made, not because of anything special we did. Someday I'll be able to tell that story.)<p>On top of all <i>that</i> I rolled out the new version of our platform a month ago and overall it's not going great. It was premature, but because it had been delayed several times and our work is seasonal, there was a hard deadline on getting it out and I had to release it missing several features that were part of the old product. (As I'm writing this, I'm cringing—another rookie mistake from someone who ought to know better.)<p>All that to say, you're not alone in finding yourself in this position. Something I spend a lot of time thinking about when I'm not fantasizing about accidentally stepping in front of a bus is what gives me meaning and self-worth, and how fucked up it is that so much of that is tied to my career. I'm also a father (so no, won't be intentionally accidentally stepping in front of any buses) and a husband, and when I'm being objective, a friend to many and mentor to more than a few. There's more to me than just my career, but damned if I could prove it right now.<p>So now I'm really trying to consider how to claw my way out of feeling this way. I see both a therapist (two, actually—one personal and one couples) and a psychiatrist, I'm on an antidepressant, and before anyone goes dropping medical advice without a license—I've got years of experience with different meds so yes, I'm certain this one is the right one for me. No, I'm not exercising or getting enough sleep, and yes, I recognize that's likely part of the problem, but it's a difficult mental hurdle to get over to take time away from (unproductively) coding. As I'm writing all this down it sounds even worse.<p>Sorry, I keep trying to get to my point but I get sidetracked (also I'm super ADD, and depression and sleep habits accentuate that). My therapist and I talk a lot about how people with a certain kind of mind/mindset need stimulation, and part of the problem for me is that the work simply isn't stimulating. I've got too much free time (until I've got not enough) and I haven't always done a great job at finding something new to engage in on the side. 
I picked up a new platform a couple of years ago, which has been a lot of fun to work in (but then I decided to port our platform over to that, which arrested all forward momentum product-wise—like I said, I've made every mistake you can make as a company here...) Maybe what you need is something engaging to start your day with—learn AI (that's what I keep promising myself after this exit happens, assuming it does) or get into some side of technology you've always been less good at. For me that's hardware—I dropped out of my CS program well before we got to the more advanced circuits classes, so while I understand digital logic the EE side of things is a black box for me I've always wanted to grok.<p>Maybe you just need more work to do—sounds like you've got a lot of free time; have you thought about consulting work or mentoring at an incubator? As boring as the business may be, I'm certain you've learned a lot of things the hard way, and that's valuable knowledge to share. Maybe if you put something in the afternoon that you really want to do, getting work out of the way early would be easier. But also, maybe you're just not a morning person. If your company is making you enough money, and it really only takes 3-4h/day, is it so bad you don't start til the afternoon? Some people just don't click in til then. Also worth noting, the average American worker only spends about 2-3h/day doing actual work [<a href="https://www.inc.com/melanie-curtin/in-an-8-hour-day-the-average-worker-is-productive-for-this-many-hours.html" rel="nofollow">https://www.inc.com/melanie-curtin/in-an-8-hour-day-the-aver...</a>]<p>Getting out of the house would probably help some too. Do you have a dedicated work environment? I bought a house recently and one of my favorite things about it is that I have a dedicated office now. (I also get an inordinate amount of pleasure out of yard work, which is difficult to explain. When I'm not fantasizing about bus factors, I fantasize about pulling weeds. Go figure.)<p>I've been kind of thinking about starting an accountability group or something with a couple of friends. That might be something to consider.<p>Anyway, I'm sorry to hijack your post with my own problems. I hope there's a kernel of something in there that points you in a direction that makes you feel more fulfilled. Best of luck, friend. |
The Case for Not Being Born | It starts with the death of companion animals. Not when you are young and comparatively resilient, but later in adult life when you are personally responsible for all of the decisions and costs, the management of a slow and painful decline. The futile delaying actions, the ugly realizations, the fading away, the lost capacity, the indignities and the pain, and all in an individual fully capable of feeling, but who lacks the ability to comprehend what is happening, or to help resist it. One slowly realizes that this is just a practice run for what will happen to everyone you know, later, and then to you. Ultimately it comes to euthanasia, and one sits there looking down at an animal who is a shadow of his or her previous self, second guessing oneself on degree of suffering, degree of spark and verve remaining. It is rarely a clear-cut choice, as in most companion animals the body fails before the mind. When it is clear, and your companion is dying in front of you, you will rush, and later chew it over for a long time afterwards; did you wait too long, could you have done better?<p>At some point you will ask yourself: why am I trying to maximize this life span? Why am I playing at balancing capacity against suffering? Why have I not just drawn an end to it? Why does it matter if a dog, a cat, another animal exists until tomorrow? Next year the animal will be gone without trace. In ten thousand years, it is most likely that you will be gone without trace. In a billion years, nothing recognizable will remain of the present state of humanity, regardless of whether there is continuation of intelligence or not. The great span of time before and after cares nothing for a dying companion animal. There is no meaning beyond whatever meaning you give to any of this, and there is a very thin line between that and the belief that there is no meaning at all, the belief that there is no point. If the animal you lived with will be gone, what was the point of it all? If you will be gone, why are you so fixated on being alive now, or tomorrow, or some arbitrary length of time from now?<p>It starts with companion animals, and it gnaws at you. The first of the cats and dogs you live with as the responsible party, the thinking party, the one motivated to find some meaning in it all, arrive and age to death between your twenties and your forties. That is traumatic at the end, but you find it was only practice, because by the end of that span of time, the first of the people closest to you start to die, in accidents and in the earlier manifestations of age-related disease. The death by aging of companion animals teaches you grief and the search for rationales - meaningless or otherwise - and you will go on to apply those lessons. To your parents, to mentors, to all of those a generation older who suddenly crumple with age, withering into a hospital or last years in a nursing facility. You are drawn into the sorry details of the pain and the futile attempts to hold on for the ones closest to you, a responsible party again. You are left thinking: why all this suffering? Why do we do this? What does it matter that we are alive? The span of a billion years ahead looms large, made stark and empty by the absence of those dying now, no matter how bustling it might in fact prove to be.<p>Grief and exposure to the slow collapse of aging in others: these are toxins. These are forms of damage. They eat at you. They diminish you, diminish your willingness to engage, to be alive, to go on. 
I think that this burden, as much as the physical degeneration of age, is why near all people are accepting of an end. The tiredness is in the mind, the weight of unwanted experiences of death by aging and what those experiences have come to mean to the individual. Human nature just doesn't work well under this load. It becomes easy to flip the switch in your view of the world: on the one side there is earnest work to end future suffering by building incrementally better medical technology, while on the other side lies some form of agreement with those who say that sadness and suffering can be cured by ending all life upon this world. Oh, you might recoil from it put so bluntly, but if you accept that existence doesn't matter, then the gentle, kind, persuasive ending of all entities who suffer or might suffer lies at the logical end of that road. It is just a matter of how far along you are today in your considerations of euthanasia and pain. This is the fall into nihilism, driven by the urge to flee from suffering, and the conviction that your own assemblies of meaning are weak and empty in the face of the grief that is past, and the grief that you know lies ahead.<p>Not all of the costs of the present human condition are visible as lines upon the face. |
The importance of dumb mistakes in college | I’m in college, and have made my fair share of mistakes that I think are “dumb” in the eyes of my peers, family, friends, society, etc, but not many that I think are “dumb” in my eyes. The most prominent mistake that I can think of? I’m spending a fourth year at a community college rather than having transferred to a university after my second or third year. I graduated high school in June 2014. I should have graduated college in May/June 2018. But now I will most likely be graduating May/June 2020 (heck, maybe a few semesters later).<p>My life as of now has not turned out the way I had planned it. I have yet to have an internship under my belt. I have no connections. If you will, I’m a twenty-two-year-old modern-day George Costanza prior to him getting that job with the New York Yankees - when he’s living with his parents and has no job. Yet I think I’ll be just okay.<p>But above all, I would like to advise other people, not only college students but also those who think they have made “dumb” decisions: Please do not worry. Modern mass culture, or as the filmmaker Tarkovsky writes, the civilization of prosthetics, cripples people's souls and makes people's mistakes/problems bigger than they actually are. Yet I recall, a year ago, struggling alone and having no hope. What guided me towards today? Rainer Maria Rilke’s Letters to a Young Poet. Here are a few quotes from the book:<p>"Everything seems to me to have its just emphasis; and after all I do only want to advise you to keep growing quietly and seriously throughout your whole development; you cannot disturb it more rudely than by looking outward and expecting from outside replies to questions that only your inmost feeling in your most hushed hour can perhaps answer."<p>"Consider yourself and your feeling right every time with regard to every such argumentation, discussion or introduction; if you are wrong after all, the natural growth of your inner life will lead you slowly and with time to other insights. Leave your opinions their own quiet undisturbed development, which, like all progress, must come from deep within and cannot be pressed or hurried by anything. Everything is gestation and then bringing forth. To let each impression and each germ of a feeling to come to completion wholly in itself, in the dark, in the inexpressible, the unconscious, beyond the reach of one's own intelligence, and await with deep humility and patience the birth-hour of a new clarity: that alone is living the artist's life: in understanding as in creating."<p>"If you will cling to Nature, to the simple in Nature, to the little things that hardly anyone sees, and that can so unexpectedly become big and beyond measuring; if you have this love of inconsiderable things and seek quite simply, as one who serves, to win the confidence of what seems poor: then everything will become easier, more coherent and somehow more conciliatory for you, not in your intellect, perhaps, which lags marveling behind, but in your inmost consciousness, waking and cognizance. You are so young, so before all beginning, and I want to beg you, as much as I can, dear sir, to be patient toward all that is unsolved in your heart and try to love the questions themselves like locked rooms and like books that are written in a very foreign tongue."<p>"Live the questions now. 
Perhaps you will then gradually, without noticing it, live along some distant day into the answer."<p>"We know little, but that we must hold to what is difficult is a certainty that will not forsake us; it is good to be solitary, for solitude is difficult; that something is difficult must be a reason the more for us to do it."<p>"And if there is one thing more that I must say to you, it is this: Do not believe that he who seeks to comfort you lives untroubled among the simple and quiet words that sometimes do you good. His life has much difficulty and sadness and remains far behind yours. Were it otherwise he would never have been able to find these words."<p>"There is perhaps no use my going into your particular points now; for what I could say about your tendency to doubt or about your inability to bring outer and inner life into unison, or about all the other things that worry you —: it is always what I have already said: always the wish that you may find patience enough in yourself to endure, and simplicity enough to believe; that you may acquire more and more confidence in that which is difficult, and in your solitude among others. And for the rest, let life happen to you. Believe me: life is right, in any case." |
Good engineers make terrible leaders | People who try to force engineers into some sort of hyper-rational over-simplified box to separate them from real world messy problems are silly.<p>I suspect the underlying reason is that once you start measuring things, you tend to learn things fast and that typically shakes up the status quo in a way that disciplines "far away" from the hard sciences aren't emotionally ready for. You don't question their sacred cows.<p>Let me give you a software example which shows how basically even the "tamest" seeming problem in software engineering is pretty "wicked", requiring some of the same skills you'd use to tackle a social issue.<p>Is website A faster than website B? What do we mean by fast? Sure we agree we can use a clock, and to bucket relativistic effects we can talk about things on planet earth. So if you're sitting there in front of a computer with a clock, what should you measure to tell if website A is faster than website B?<p>The answer is nobody really knows but some answers seem better than others. The most simplistic measure is to measure the time from when a request is sent to when the last byte of the response is received. But once we have a number we have to ask what it means in terms of "faster" and if it's the right number.<p>Some single page apps these days are really fast by that measure. They just give you a static shell, that would sure beat the pants off just about any other webpage download. Of course for single page apps giving you a static shell, the initial download is not the end, but the beginning. They start to load css and javascript and images, etc.<p>We might say that page download time is a poor metric because if I'm some poor schmuck trying to visit a page, the time that the html took to download doesn't really mean anything to me.<p>For a while people thought time to first byte was more interesting. Of course it really means nothing, but in some cases, for a short while (streaming pages from jsps, etc) before mvc became the predominant paradigm, a good ttfb was correlated with sending some html that the browser could get to work on, allowing it to render content sooner, which seemed like a good thing.<p>Do you see the wickedness here? In order to even tackle the problem we have to invent measures and targets, even though this is a "tame problem" with an objective stopwatch for measuring.<p>We kind of "knew" the right measurement was one that the human interacting with the web page would intuitively understand as "website A is fast". How long until they could see the page, how long until they could "touch" it and interact with it?<p>As people started doing more javascript, the approximation of TTFB stopped being as useful. There were easy things to measure, domready and page loaded. Dom ready meant that a page had loaded enough for javascript to begin interacting with a "complete" document. So almost by definition it wasn't a good measure since it marks the beginning of the lifecycle of an AJAX app or client side mvc or single page app or whatever we're calling it next week. Content loaded often was too conservative a measure. It didn't fire until the last image had downloaded and rendered. That could be below the fold and as they say: "out of sight out of mind".<p>So at this point if we want to answer the question: "is website A faster than website B?" we only know the real answer is somewhere between 2 numbers.<p>Of course facebook cheated. 
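The cheat, roughly -- a from-memory sketch of the pattern in TypeScript, not Facebook's actual code:<p><pre><code> // before the app boots: capture clicks at the document level and queue them
 const queue: Event[] = [];
 const capture = (e: Event) => { queue.push(e); };
 document.addEventListener("click", capture, true);

 // once the app has rendered and has its real handlers, replay the queue
 function onAppReady(realHandler: (e: Event) => void): void {
   document.removeEventListener("click", capture, true);
   queue.forEach(realHandler); // nothing the user did is lost
   document.addEventListener("click", realHandler, true);
 }
</code></pre>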
They attached event handlers to the body pre-domready that just appended events to an Array (a queue) so that people could see a button and click on it, and even though the "app" wasn't done rendering, you could say: "a customer could see the page and interact with it without those interactions being lost".<p>So if we want to measure "when can I see a webpage and start interacting with it?" it's a pretty tough question that might not have an answer. The best we can do is say: "you loaded a web page at time t0 and at time t1 you performed an action and it (did|did not) work." We could compare those 2 things... if the action was something you could do on both websites.<p>You can make videos of websites rendering and visually compare them.<p>I've spent my career as a performance engineer and I don't really know how to, in the general case, answer the question: "is website A faster than website B". In specific cases it's sometimes clear. For example I could visit a static html page with no buttons and say: "it was definitely loaded in 50ms" and then visit some monstrosity of a single page app that displays a loading screen for several seconds and conclude: "the static webpage was faster than the app".<p>But I'm sure designing a bridge is sooo much simpler and there's no ambiguity there.<p>Right? |
Ask HN: How do you stay focused while programming/working? | I don't know if this will help you at all, but perhaps it will.<p>First off, I think there's a myth about productivity. That you are only productive when you are writing code. That's not true.<p>The way I work is that I need to have a fully formed idea of the solution to a problem before I start writing code. That doesn't mean it's the correct solution. But I have to start with an idea of how I'm going to solve the problem.<p>Sometimes that takes 10 seconds, if it's a new feature. Sometimes it takes 10 days if it's a bug I don't understand. Sometimes it's 10 hours for a totally new project. Could be anything. But I always start with documenting things in my way: offline. Your notebook and a good pen are your friends.<p>I do my best thinking at this stage away from the computer and away from code. I do it best walking around. Once I think I know how to solve the problem (usually wrong for larger projects and bugs; usually right for new features in an existing project), I write down what I intend to do on paper. I know Project Management tools exist for this task, but they don't have the same connection for me as physically writing something down.<p>As I'm doing that writing, I'll realize a lot of things that are bad about this approach and correct myself. And then my brain will start to organize tasks and group them. While I'm going through this exercise, I will set checkpoints for myself. If it's a ticket for a bug that needs to get fixed today, well, maybe I'll get lucky, and it's one and done. Other times, not so much.<p>I also try to organize my work segments vs. my thinking segments around my meeting schedules. Because even when my day is broken up by meetings, I'm still probably thinking about my problem.<p>Building in your checkpoints by planning your work this way has been very helpful for me, not because I need to check facebook or twitter or HN, but because it represents tangible progress.<p>But if you actually look at my workflow, it's a constant iteration of think a lot, followed by pounding out some code for a few minutes. Sometimes that ratio is very small on the code side, and sometimes it's quite large on the code side.<p>Without making comments as some other people have about medical or mental problems, I would suggest this: that there's a lot to be gained from working in very small chunks and that if you're worried about momentum, perhaps that's the problem you should be trying to address. Momentum is a tremendously overloaded word, and you shouldn't be trying to evaluate yourself based on your ability to obtain or maintain it.<p>I have a fairly similar pattern to what you are talking about. Solve a thing; then think about the next thing; then solve that. No one has ever complained about my productivity. On the contrary, people sometimes wonder how I can get so much done when I spend so little time "working."<p>Different people work in different ways, and I don't think that's a thing to worry about. I've really never very much experienced what people call the zone in programming. I certainly do as a musician when I'm practicing my violin. My girlfriend hates when I practice violin because I absolutely cannot be taken out of that mental space.<p>As a developer, I don't care about the context switch or the momentum or focus. I can go back and forth. No big deal. So I can understand why this can be problematic. But it might not be.<p>Questions to ask: are you being told that you aren't productive enough? 
Are you feeling like you aren't doing good enough work? Or enough good work? Where is this criticism coming from? Is it external or internal? Why do you want to change your patterns?<p>One of the things that technology companies need to realize is that there are more ways to be productive than just by being in your chair.<p>I'm my most productive when I'm not writing code. I'm my most productive when I'm solving the problem. And then the code is just a translation of what I've figured out.<p>When I follow this path, I find that my thoughts are more often wrong than right, and I don't have to worry about getting up and thinking or rewarding myself until I've fixed my thinking about the problem and proven it to myself through code and tests. And when I get into that mindset, I can't stop until I've corrected myself and crossed everything off the list in my notebook.<p>Take that for what it's worth. I'm just one person. And I could be wrong about all of this. But I think I understand a little of what you're talking about, and this is how I deal with it. |
The State of JavaScript 2017 | If you view source and check the web console you'll see a link to a quiz:<p><pre><code>                                                            /\
----+---- / \ ----+----
+----+ +------- | / \ | +------- +----+
|the | | | /------\ | | | of |
+----+ +------+ | / \ | +----- +----+
| | | |
+-----+-+-----+ -------+ | +----+ | +-------
+-----+ +-----+ +-+----+-+ ( ) ++
| | | | ' ||
| | +--+ +-+ +-+ +--+ | | ++---+ +-+/--+ +++ +++--++ |+--+
| | +++--+++ \\ // +++--+++ +-+----+ +++---+ +-----+ ++| ||+--+++ |+--+
| | || || \\ // || || +----+-+ || || || || || ||
| | +++--+|| \V/ +++--+|| | | +++---+ +-++-+ +++-+||+--+++ ++--+
| | +--+++ V +--+++ | | ++---+ +----+ +---+||+--++ +--+
+-+ +-+----+-+ ||
+-+-----++ +----+ ||
+-----+ /\/\ -----+ +---+ /\/\ ++
/ / / | +----+ | | | \ \ \
\/\ / +----+ | | | | \ /\/
/ | | | | | \
/ | | | | | \
+----- +----+ | |
</code></pre>
Will you be wise enough to escape the JavaScript Jungle? Take the challenge to find out!<p><a href="http://bit.ly/2yVkZNc" rel="nofollow">http://bit.ly/2yVkZNc</a> |
Game Theory: Open Access textbook | I use Game Theory in pretty much everything (academic).
Here are some thoughts on the literature.<p>There are several different strands and evolutions of Game Theory.<p>1. Game Theory (non-cooperative):<p>The basis was Neumann/Morgenstern Theory of Games. It has been suggested in this thread; however, its focus is a bit obscure today. Still useful for repeated games, for example. Both authors are also important for Decision Theory, see below.
Afterwards came Nash, defining what the basic solution concept would be up until maybe 1990. Simple Nash equilibria are used primarily where rational agents choose in mathematically nice spaces where uncertainty is not a major factor.<p>Following Nash, the Game Theory literature developed to produce equilibrium refinements.
These, usually subsets of Nash equilibria, were created because Nash often predicts very little - the space of equilibria is often so large that nothing can be learned, or uncertainty requires the incorporation of different information sets of agents.
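For reference, the baseline object all of these refine (standard notation, nothing specific to any one text): a strategy profile \sigma^* is a Nash equilibrium if no player can gain by deviating unilaterally,<p><pre><code> u_i(\sigma_i^*, \sigma_{-i}^*) \ge u_i(\sigma_i, \sigma_{-i}^*) \quad \text{for every player } i \text{ and every alternative strategy } \sigma_i
</code></pre>The refinements discussed here all carve subsets out of this set.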
The first developments came while incorporating uncertainty and multi-stage games (where people move in sequence).
Harsanyi was able to show that most configurations of uncertainty situations can be represented as a Bayesian Game (the issue was the recursion of "he knows, that I know, that he knows that I know..."). The problem was that these often produced unintuitive and large sets of equilibria.
So we have refinements. Some target robustness, like Selten's Trembling Hand. Others target "natural behavior", empty threats and so on.
Almost all of those refinements are a subset of a Nash concept. The development of refinements was en vogue prior to the 90's, when it stopped for reasons I will detail below. Basic Nash has survived, however, and is still the go-to tool to understand multi-agent decision problems (at least initially).<p>1.a Cooperative Game Theory:<p>Largely in parallel, mathematicians like Shapley and later economists like Roth also tried to think about cooperative games. Here, we don't look directly at what individual people do in isolation, but rather what groups are stable and plausible and what they can achieve. If for example a smaller group can "break" a coalition, then such a large coalition cannot be considered a plausible solution. Matching theory comes from here, for example, so you will find it in most problems of assignment (say, students). Like non-cooperative Game Theory, it is applied widely.<p>2. Decision Theory:<p>Decision theory developed in parallel and is a wide field. It is, however, critically important to Game Theory because it sets the stage for information, constraints and decisions that agents take. Expected utility, by Neumann and Morgenstern, was and is the basic instrument to understand how agents incorporate their knowledge. This was based on objective probabilities, so in parallel the Bayesian stream also developed. With a monumental and beautiful proof, Savage then developed Bayesian Decision Theory (based on works by de Finetti and others). This is critical to many, many fields in maths, statistics and science in general, and was then the basis for Game Theory. Aumann is associated with later refinements of decision theory.
Later on, the idea of uncertainty (Knightian uncertainty) became important. This is when you cannot assign a probability to an outcome. Paradoxes by Ellsberg and Allais have shown that this is actually an important decision problem in real life. Multiple approaches exist to generalize Decision Theory, such as Prospect Theory, MinMax Preferences, integration by capacities as opposed to measures. Schmeidler, Gilboa and Wakker are some names.
Game Theory exists in this space as well.<p>3. Evolutionary Game Theory:<p>The idea came from Biology and is important because it is a way to justify Game Theoretic outcomes without even requiring purposeful action by agents. It had a huge impact on many problems, especially dynamic ones and "top down" models, but did not supplant traditional Nash in general. Some scientists believe it should. Others think it's just one more tool. There are those who believe the whole of social sciences should be based on it... Let's say it did not achieve that yet.<p>4. Economic applications:<p>Economics was historically the discipline to apply Game Theory most. Initial concepts like Nash justified many early models of Markets. Earlier concepts of non-perfect competition were formalized with Game Theory.
Things really started to take off when asymmetric information was introduced. Think Moral Hazard, Signaling Games, Contract Theory and so forth. What we know about economics, organizations, business, competition and many social phenomena today has largely been developed by applying Game Theory. There are too many great names to mention: Akerlof, Tirole, Spence, Hart, Holmström, Myerson, Stiglitz.<p>5. Mechanism Design and Auction Theory<p>In the 70's and 80's, from the above applications, economists like Hurwicz, Myerson and Maskin developed mechanism design.
The idea is simple and genius: If agents play games, what if we can choose the game they play? Which game do we choose without them walking away, but with us getting the desirable outcome? What is, in other words, the optimal mechanism inducing the agents to play a game?
Initial examples and today's shining example of econ in action is Auction design. Which sort of auction mechanism is best to sell ads, run eBay, or assign broadband licences?
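The canonical worked example (standard textbook material, not drawn from any one of the books below): in a sealed-bid second-price auction the winner pays the second-highest bid, which makes truthful bidding weakly dominant. With your value v and highest rival bid b,<p><pre><code> \text{bid } v:     \text{ win iff } b < v, \text{ pay } b, \text{ payoff } v - b \ge 0
 \text{bid } a > v: \text{ outcome changes only when } v \le b < a, \text{ yielding } v - b \le 0
 \text{bid } a < v: \text{ outcome changes only when } a \le b < v, \text{ forfeiting } v - b > 0
</code></pre>No deviation from v ever strictly gains, so the designer has chosen a game whose equilibrium is honest reporting -- mechanism design in miniature.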
Mechanism design leads to very complex problems, which is why until the early 90's many simplified assumptions were used. While mechanism design has been very useful, this also led to two developments. In econ, papers started to get more and more complex to accommodate real life issues like non-monetary transfers, dynamics, complex type sets and so forth. Computer scientists trying to implement mechanisms quickly discovered that many were simply too complex, so they started Algorithmic MD.<p>6. Experimental and behavioral games:<p>So earlier, I said that a whole catalogue of equilibrium refinements basically died out. Why is that? Well, with behavioral econ we were introduced to more realistic approaches to decisionmaking. Then questions arose, such as "what if I cannot count on the rationality of my competitor". As it turns out, this may actually break the inference of Nash equilibria pretty handily.<p>At the same time, economists and psychologists put people in experiments to play games. In some situations, Nash worked well. In other situations, one could accommodate much by using more complex Decision Theory.
But in many instances, people would just not play Nash. In other words, they couldn't even figure out the most basic solution concept. Indeed one can do all sorts of experiments in a Game Theory 101 class showing that people often choose much too heuristically.
Since equilibrium refinements only make Nash more complex, it was clear they had to be abandoned. Currently, joint research in decision theory and game theory works on finding better ways to model behavior when Nash is not reached.<p>Books:<p>Osborne/Rubinstein has been mentioned. Contrary to what was said, this is an undergrad book and a solid intro.
There are two classic works. The major one is by Fudenberg and Tirole; the other is by Myerson. The former is more standard, the latter is better.
Now there is a new book by Maschler, Solan and Zamir with like 900 pages. It's really good, and I would definitely get it as a second book after an intro.<p>For Mechanism Design, the best book is by Tilman Boergers. It's also free to download.
Auction Theory specifically has a standard volume by Krishna.
Both of those are math heavy. This is true in general, but Game Theory concepts can often be explained by intuition. For Mechanism Design, I fear that a solid math background would be required, because the space of "choosing a game" is mathematically not so nice. However, solid means you should have a good grounding in analysis and optimization, perhaps dynamic systems. Basically, a math heavy undergrad education will be fine.<p>hope this helps |
F.C.C. Repeals Net Neutrality Rules | I’m surprised I haven’t seen more mention of the Neighborhood Network Construction Kit: <a href="http://communitytechnology.github.io/docs/cck/" rel="nofollow">http://communitytechnology.github.io/docs/cck/</a><p>I believe it has been partly created by the folks who are building their own internet service in Detroit (more about that at <a href="https://motherboard.vice.com/en_us/article/kz3xyz/detroit-mesh-network" rel="nofollow">https://motherboard.vice.com/en_us/article/kz3xyz/detroit-me...</a>)<p>While the fight rages on for the major providers to commit to being open and fair, I believe it is probably very prudent to simultaneously begin sorting out how we could go about 1) fostering competition and 2) creating community-backed local networks and making them appealing enough (even if just to us tech folks at first) that they start to catch on. If we start at the foundational level (i.e. getting peoples’ homes connected or connectable on a locally-controlled network, wireless or otherwise, regardless of whether that network still links up to a major provider’s backbone in turn, which it would), then we are in a better position to then start looking at linking up those local networks directly to one another (forming regional networks, etc) and also to backbones provided by companies/organizations that commit (in writing) to an open and fair internet. To that last point, I think it might be worthwhile to also explore the possibility of a non-profit organization with values similar to Mozilla finding a way to purchase, build, or otherwise control some of the internet backbone/internet access in America (forgive me, I know very little of what’s all involved at that level, I'm sure it's a herculean task).<p>In any case, this all seems daunting. But I propose that the initial approach is to start small, start local, but use a multi-pronged strategy (e.g. crowdfunding for projects, raising up wireless community networks, advocacy and marketing help for fair and privacy-conscious ISPs, exploration of non-profit backbone formation, etc), and pick up momentum.<p>If there is no real market competition, and we’re subject to monopolies, and those monopolies directly go against the loudly voiced will of their customers and what appears to be the majority of American citizens, then let’s give ‘em hell. It’s not a short-term project. But everything starts somewhere…<p>On my soapbox
-------------
We are talking about who controls access to free speech here. As benign as it may seem to some people that an internet provider might be allowed to throttle some bandwidth and block some sites, their monopoly nature means that, under these new rules, they pose a threat of direct censorship of speech that reaches the masses, which in turn directly threatens the liberty of the American citizen. It’s important, both for us and for future generations, to fight this tooth and nail and to even go so far as to rebuild internet access ourselves over the course of years under a new charter if that’s actually what it takes. The internet is the greatest free speech tool we have as citizens. And regardless of whether we believe in regulation or de-regulation, the reality is that a group of monopoly controllers of internet access pushed hard for rules allowing them to throttle, censor, and use our data in ways that make many of us feel uneasy. They wouldn't push that hard if they didn't intend to use these allowances in some way. It’s a legitimate threat.<p>Overcoming this threat is a cause that’s worth thinking about, and acting on behalf of, in big and bold ways. And perhaps we could also solve for some of our ongoing privacy concerns along the way. Because, my god, what person does not wonder if we are slowly sliding away from being citizens who are truly free to speak our minds and not be spied on arbitrarily with a privacy situation such as we are facing, a situation which is already unreasonable and is getting worse.<p>Aren’t you tired of being leveraged against?<p>Hope is not lost. We just have to take it into our own hands and fight the fight. Because that’s what happens when you’re tired of it.<p>In the interim…<p>...while this gets started, we need to compile a list of internet service providers who will commit in writing on their customer agreements that they will not block or throttle access to content which is lawful. Perhaps we could also find providers willing to commit in writing that they will treat their users’ data as private, not sell our data to third parties, etc. We then need to become loud advocates for these companies. We need to effectively help them with their marketing by raising their visibility up and by encouraging people to switch to them. Imagine how many people would be interested in starting an ISP if they knew that they would get free marketing!<p>In other words, I propose that we mourn the state of things quickly and then transition into action. If nothing else, that might just be the most effective form of protest we could engage in. In fact, I think this might be the form of protest that works best today in a variety of realms...don’t just hold signs and march, don’t just voice frustration in venues, instead simply begin creating what you want to see...and don’t give up. |
REST is the new SOAP | > <i>You want to use PUT to update your resource? OK, but some Holy Specifications state that the data input has to be a “complete resource”, i.e follow the same schema as the corresponding GET output.</i><p>If you were working in a strongly typed language with RPC calls, you would see the same problems, or symptoms thereof. For example, if you had the two RPC calls storeFoo and retrieveFoo, you'd expect them to both take Foo objects, no? Something like,<p><pre><code> storeFoo(name: String, foo: Foo)
retrieveFoo(name: String) -> Foo
</code></pre>
and PUT/GET in HTTP yearns for the same dichotomy.<p>> <i>So what do you do with the numerous read-only parameters returned by GET (creation time, last update time, server-generated token…)? You omit them and violate the PUT principles?</i><p>Yes, and just call it POST, since it's no longer symmetric and violates PUT principles. REST has nothing against POST. Again, this problem would be reflected similarly in RPC:<p><pre><code> storeFoo(name: String, foo: PartialFoo)
retrieveFoo(name: String) -> Foo
</code></pre>
(or perhaps storeFoo ignores some fields on Foo, etc.)<p>> <i>creation time</i><p>This is a fantastic example, because I actually had this problem w/ a non-RESTful API. We took a generic sort of "create record" and recorded the current time as the "start" time. The problem was when the device lost network connectivity (which was essentially always, just a matter of "how bad <i>is</i> the latency"): the creation of the record would lag, sometimes by <i>hours</i>, and the eventual "creation time" it was tagged with was <i>wrong</i>. We should have trusted the device, because it knew much better than the server when the actual record was created.<p>Now, this isn't going to be the case all the time; sometimes you literally want just the time the DB INSERT statement happened, and that's fine.<p>> <i>Pick your poison, REST clearly has no clue what a read-only attribute is</i><p>POST a "create" request, GET has the newly created fields. Symmetric PUT/GET is nice, but show me where in HTTP it is recorded as an absolute.<p>> <i>Meanwhile, a GET is dangerously supposed to return the password (or credit card number)</i><p>Not only does REST not dictate this, nobody in their right mind should do this. GETs just return the resource. What that resource is, what data is represented — that's up to you in HTTP just as it is in RPC.<p>> <i>lots of resource parameters are deeply linked or mutually exclusive(ex. it’s either credit card OR paypal token, in a user’s billing info)</i><p>If your request looks like,<p><pre><code> "paypal_token": ...
"credit_card": ...
</code></pre>
Then your RPC would look like,<p><pre><code> struct PaymentDetails {
paypal_token: ...
credit_card: ...
}
updatePaymentDetails(..., new_details: PaymentDetails)
</code></pre>
and you're in the same hot water, again, just with RPC. If your type system allows it, you can make them mutually exclusive there, perhaps something like,<p><pre><code> enum PaymentDetails {
Paypal(token),
CreditCard(card_number),
}
</code></pre>
but then, that cleanly translates back into a RESTful API's information too. Now, JSON is typically used, and it doesn't really expose a real sum type, which is a shame. You can work around it w/ something like,<p><pre><code> "payment_details": {
"type": "paypal",
"token": ...,
}
</code></pre>
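and it works well enough. In TypeScript, for instance, that tagged shape can be typed as a proper sum type (a sketch of mine, not from the article; the field names just mirror the JSON above):<p><pre><code> type PaymentDetails =
   | { type: "paypal"; token: string }
   | { type: "credit_card"; card_number: string };

 function describe(p: PaymentDetails): string {
   // the "type" tag lets the compiler narrow each branch,
   // so mutual exclusivity is enforced statically
   switch (p.type) {
     case "paypal": return "paypal token " + p.token;
     case "credit_card": return "card ending in " + p.card_number.slice(-4);
   }
 }
</code></pre>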
If you can't express it in the type system (in <i>either</i> RPC or REST) then you have to do some validation, but that's true regardless of whether RPC or REST is in use.<p>> <i>you’d violate specs once more: PATCH is not supposed to send a bunch of fields to be overridden</i><p>…how is that a violation of the spec?<p>> <i>The PATCH method requests that a set of changes described in the request entity be applied to the resource</i><p>> <i>With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version.</i><p>That's exactly what sending a subset of fields <i>is</i>. It's an ad hoc new media type describing a set of changes.<p>> <i>So here you go again, take your paperboard and your coffee mug, you’ll have to specify how to express these instructions, and their semantics.</i><p>No more than you would an RPC call, AFAICT. E.g.,<p><pre><code> updateFoo(
field_a_new_value: Optional<...>,
field_b_new_value: Optional<...>,
)
</code></pre>
<i>etc.</i> is really no different.<p>> <i>OK, but I hope you don’t need to provide substantial context data; like a PDF scan of the termination request from the user.</i><p>If it's not a simple "delete this thing", that's fine. POST is still there.<p>> <i>For example, lots of developers use PUT to create a resource directly on its final URL (/myresourcebase/myresourceid), whereas the “good way” of doing it is to POST on a parent URL (/myresourcebase), and let the server indicate, with an HTTP redirection, the new resource’s URL.</i><p>Either is fine.<p>> <i>Using “HTTP 401 Unauthorized” when a user doesn’t have access credentials to a third-party service sounds acceptable, doesn’t it? However, if an ajax call in your Safari browser gets this error code, it’ll startle your end customer with a very unexpected password prompt.</i><p><i>Only</i> if you accept Basic authentication, and indicate that in your headers. It is, I agree, somewhat unfortunate that the browsers do not let you control this behavior. I don't feel that this is an issue for many people these days. (If you're using something like a JWT, you're not going to hit this, since you'll likely be using something like Bearer for an authentication scheme.)<p>> <i>Or you’ll shamelessly return “HTTP 400 Bad Request” for all functional errors, and then invent your own clunky error format, with booleans, integer codes, slugs, and translated messages stuffed into an arbitrary payload.</i><p>Would you not need to stuff that information into some form of error or exception type in the RPC world? (The error "code" might be standardized, e.g., JSONRPC does this, but the associated data cannot be.) But unless you clearly fall into one of the predefined categories, and even if you <i>do</i>, it doesn't hurt to settle on a standard "error" type/format.<p>REST is about talking about the resources, about defining formats / objects that clearly indicate whatever state/data they represent. For most people, this is "just" going to be a JSON document, but all too often you see the same <i>concept</i> or state expressed five different ways in not-RESTful APIs. For example, a ZIP code that is sometimes just a string, sometimes a {"zip_code": "12345"} sometimes a {"zip": 12345}; the author here is failing to understand that a common type exists, and that he should form an actual format around it that he can refer to globally (this field is a <i>zip</i> — and we defined that <i>here</i> and we know it always looks the same, always.)<p>Frankly, I feel like most of that issue is from JavaScript and JSON, since both discourage any form of static typing of data, and humans naturally just get the immediate job done, but at the cost of having the same concept expressed six different ways.<p>> <i>Or, well, yes, actually it remember its authentication session, its access permissions… but it’s stateless, nonetheless. Or more precisely, just as stateless as any HTTP-based protocol, like simple RPC mentioned previously.</i><p>This isn't what "stateless" refers to. It's <i>stateless</i> in that the requests are (generally speaking) independent of each other. One might need authentication, and you might have to get that somewhere, and that might need a request, yes. Storing data of course changes the state of the server. 
But you can disconnect and reconnect and reissue that next request without caring.<p>> <i>The split of data between “resources”, each instance on its own endpoint, naturally leads to the N+1 Query problem.</i><p>It can, but again, doesn't need to, and is no more susceptible to this than an RPC API. In an RPC API, you still need to determine what to expose, and how many API calls and round trips that will require…<p>> <i>“The client does not need any prior knowledge of the service in order to use it”. This is by far my favourite quote. I’ve found it numerous times, under different forms, especially when the buzzword HATEOAS lurked around;</i><p>I feel like this was expressed by Fielding to encapsulate two ideas:<p>* You can push code, on demand.<p>* You can refer to associated resources, or even state transitions, via hyperlinks.<p>Most supposedly (but not really) RESTful APIs do neither, so it's entirely a moot point. I think people <i>way</i> over read this to mean <i>absolutely</i> no prior knowledge, whereas Fielding intended it to mean something closer to "only knowledge of the media types being returned", which is actually quite a bit of prior knowledge. But the two bullets above still allow you to encode a considerable amount of flexibility into an API, considerably more than something that ignores those points.<p>Frankly, I feel like if you start with an RPC protocol, you'll eventually want some stuff: caching. Retries would be nice, but you need to know if you <i>can</i> retry the operation. Metadata (headers). Partial requests. Pagination. Can you encode all of this with RPC protocols? Absolutely! But it comes up so often, it would be nice to standardize it, and many of these things (caching, pagination, range requests) get a lot easier if you stop talking about operations and start talking about the data (resources) being operated on, which is — to me — the big "ah ha!" of HTTP, and why HTTP is what it is today. That is, HTTP <i>is</i> a highly evolved RPC protocol. |
In Japan, small children take the subway and run errands alone (2015) | There's a great article on the topic (for USA) by Michael Ventura of Austin Chronicle:<p>A little clause set between commas in a missive from a reader about a recent column ["Hero vs. Superhero," July 25] -- in that piece I wondered about the effect of today's movie superheroes on little children as opposed to what children saw, say, in John Ford's 1956 The Searchers. The reader was intelligently critical of the piece, but in that clause he doubted that many 7-year-olds ever went to movies like The Searchers. A natural mistake -- he was clearly too young to remember, and how else would he know? But, for me, his error brought back a lost world.<p>In today's America, where parents chauffeur kids to "play dates," only on the poorest streets do 7-year-olds still roam free ... but they don't go to movies much because tickets are so pricey ... the concession stand is even more expensive ... and you can't just walk into any movie (it might not be rated for kids) ... and you have to know the exact time a film starts ... and shopping-mall movie theatres are rarely within walking distance. Today you're blitzed by TV ad campaigns and product tie-ins in fast-food joints, so you know all about a Hollywood film before it starts ... and today's urban parents panic if their grade-school children disappear, unaccounted for, for hours on end.<p>Fifty years ago, none of that was so. In the larger cities, two or three movie theatres were in walking distance of most neighborhoods. Each had but one screen. The program began with a newsreel, a few cartoons, and brief "coming attractions" (not today's compilations that tell the whole story). Then the "features" began -- plural, features, for all neighborhood theatres played double, sometimes triple, features, two or three movies for the price of one ticket. Nothing was rated, there were no sex scenes or obscenities; anyone could go to any movie. Admission price for a kid was rarely more than a quarter. Popcorn for a dime, a Coke for a nickel. They weren't supposed to sell kids tickets during school hours, but they did. As for kids roaming about -- "Go play in traffic," our parents would say, and they wouldn't be surprised if we didn't walk in 'til dinnertime, which in our immigrant neighborhood wasn't until after 7.
("Go play in traffic" wasn't so harsh a phrase as it sounds. Where else could we play?)<p>And you didn't go to a movie, you went to "the movies." You rarely knew the title of the film you were going to see until you saw the marquee -- and even then you might not recognize the title. Big productions were advertised on billboards, but there weren't so many billboards. No ads on the sides of buses, and none in supermarkets (and there weren't that many supermarkets). Second features were never advertised. TV ads for movies? Very rare in the early Fifties, and not so common by the end of the decade. Except for Davy Crockett's "coonskin" hats (and that was for television), massive product tie-ins were decades away. So adults and kids alike went to the movies, to see whatever was playing -- especially during the hot months, because in those days the big pull was to go to "an air-conditioned movie," as the phrase went. Into the early Sixties, movie theatres were among the only air-conditioned public buildings, and nothing was more rare for working-class people (on the East Coast anyway) than an air-conditioned residence. (I didn't live in one until I was 29.)<p>And there was this, a fact that can't be overestimated: Almost all movies (with Disney the major exception) were made for adults. Kids went to the movies, but few movies were calibrated for kids. Yet no ticket-seller I ever encountered thought it strange for a 7-year-old alone, or a group of three or four, to show up. I went every time I could scrounge the change, and I don't remember ever being turned away.<p>My birthday is late in October, so I was still 7 in 1953 when I saw my first film without "parental guidance" -- or parental presence. Frankly, it kind of shocks me to write that, for I can't imagine the parents of 7-year-olds today allowing their children to go to the movies alone. In fact, I doubt a lone 7-year-old would be sold a ticket now anywhere in this country. But once upon a time, it was no big deal. (All of which makes urban parents of 50 years ago sound permissive. They weren't. We would never have dreamed of speaking to our parents, or to any adult, as I now hear so many minutely supervised kids speak to theirs. Disrespect was not tolerated. Neither was whining. I know that sounds like an exaggeration. It's not.)<p>So, at the age of 7, three or four other urchins and I saw Vincent Price in House of Wax -- in 3-D, no less. It was deliciously scary in a harmless sort of way. But that same year I felt true horror at seeing (alone) The Robe; the Jesus I prayed to -- I watched him be crucified, watched the nails entering his hands, and it was among the more shattering experiences of my little life. And then, a different kind of shattering, that same year: The War of the Worlds -- the scene where the crazed mob throws the scientists out of their truck, destroys their work, and so (seemingly) ends all hope that mankind might survive the martian invasion. When later I became an obsessive reader of history, I had occasion to think of that scene many times.<p>In the spring of the next year, when I was 8, my 6-year-old cousin Tony and I "went to the movies," and what was playing was what I now know to be one of the rarest films by a major director and major star: William Wellman's The High and the Mighty, my first John Wayne film. Tony and I sat through it twice -- that is, we sat through The High and the Mighty, a second feature that I've forgotten, and The High and the Mighty again. 
For in those days once you bought a ticket you could sit there till the theatre closed. Also, since you just "went," you almost always walked in the middle of whatever picture it was, stayed through the second feature, and watched the first feature until the scene you walked in on. That's the origin of the saying, "This is where I came in" -- people would often leave at that point, with those words on their lips. Not me. I always stayed 'til the end, even if I didn't like the picture. I was (and am) stubborn that way.
The High and the Mighty was about a haunted, limping co-pilot (John Wayne) who'd survived the crash that killed his wife and child. He redeems himself by managing to land a (propeller-driven) airliner that otherwise would have been destroyed. Tony and I walked out whistling the haunting theme music. We went back the next day. Which was the first time I ever went back to a movie. I can still whistle that theme, but I've never seen that film again -- to my knowledge it has yet to appear on TV, VHS, or DVD. What happened to it? In any case, I kind of fell in love with John Wayne. He was the man my 8-year-old self wanted to be.<p>Other movies I saw solo or with buddy-urchins: The Blackboard Jungle, East of Eden, Rebel Without a Cause when I was 9 ... The Searchers when I was 10 ... age 11, I saw Edge of the City (the complex friendship between John Cassavetes and Sidney Poitier turned my young head around about race). Eleven, too, when I saw A Face in the Crowd and The Three Faces of Eve. Face left me absolutely stunned -- so much so that I couldn't stand to see the film again until I was well into my 40s. I know this will sound incredibly naive to a modern ear, but Face taught me that those smiling faces on TV were laughing at me. Baby, that changed me. I stopped believing a lot of things that year -- and stopped calling myself a Catholic. I didn't trust a priestly smile anymore. As for The Three Faces of Eve -- my mother had been in and out of mental hospitals, so Eve taught me more than I wanted to know, earlier than I could absorb it, and to this day I've not been able to watch it again.<p>All of which is to say ... there are good arguments for and against a child seeing such things, though I'm glad I did. Good arguments for and against children roaming dangerously, freely. Yes, disasters happened. You had to learn how to handle men who sat next to you and groped (that was rare, but once was plenty); I'd always grab an aisle seat, and I took to the trick of spilling Coke and popcorn on the seat next to me. But, as W.D. Snodgrass said when asked why he didn't write poems about A-bombs: "I've seen more people killed in their living rooms." It isn't much of an exaggeration to say "the movies," as an entity, raised me. It was like listening in to the conversations of the grownups. Which, we now forget, is how children have been raised for eons. So, reader, yes -- 7-year-olds did see The Searchers, in another world, and in a freer, riskier, more exciting, and inviting country.<p><a href="https://www.austinchronicle.com/columns/2003-08-22/174046/" rel="nofollow">https://www.austinchronicle.com/columns/2003-08-22/174046/</a> |
Ask HN: How can I become a self-taught software engineer? | Software development is a vast field, with countless options to explore later, far too many to give a good long-term plan at this early stage. As you say, what you need is a place to start.<p>There are many languages, each with their own pros and cons, but Python is a safe bet for your first one. It’s useful in its own right, and it’s also reasonably mainstream in its ideas and techniques, so you can learn a lot of transferable programming skills. There are plenty of decent tutorials around, both online and in book form.<p>If you’re planning to follow the self-teaching route, I suggest that the single most important thing in the early days is to just keep writing code. The theoretical foundations are useful, and important for a lot of professional work, but IMHO there is no faster or more reliable way to kill a new developer’s interest in the field than to bog them down with “doing things properly”. You have to <i>make stuff</i>, to experience the joy of programming and dispel the illusions about it being some sort of black magic. Everything else can wait, and this way when you do get to more advanced theory and more sophisticated tools and more carefully structured processes, you’ll have enough experience to understand why those things are helpful (or, in some cases, why they aren’t and you don’t need to get bogged down with them).<p>You have to start with the “Hello, world” stuff. This is trivial, but it takes the important step of writing and running real code on a real system, and therefore knowing that you have the tools you need available. Any tutorial is going to begin at that kind of level in the first hour. But then over the next few weeks and months, my recommendation is to try a few more substantial projects in different areas to see what you enjoy, and for a while, just learn by doing.<p>If you enjoy things like computer graphics and pretty pictures, and you’re comfortable with math, you could try something like rendering the famous Mandelbrot set (there’s a tiny sketch of this at the end of this comment), or generating a 3D landscape with mountains and lakes. As a novice, you can probably still write a program to do either of those things within a weekend with the help of online tutorials. This sort of exercise will give you a chance to exercise your basic programming skills, but it will also introduce you to essential ideas like how to use libraries of code that other people have written to do things like drawing on screen or reading data from a file or fetching data from an online source or countless other useful things.<p>If you’re interested in gaming, you could try something more interactive. Try implementing simple games like snake, 2048 or Tetris. Again, these are projects you can complete within a few days, following a path many have walked before so there are plenty of hints available with a bit of searching if you need an idea of how to get going. These will further exercise your basic skills and use of libraries to avoid reinventing the wheel, and in addition you’ll have to think about how to structure a non-trivial program that has some interactive input and output to deal with.<p>If you want to try some web development, you can write a simple database application. An address book is probably the classic exercise.
You’ll need to learn about a few other tools in addition to the pure programming aspect in this case: you’ll need a server (even if it’s just your own development PC) running both web and database server software, you’ll need to figure out how your Python code fits in with those other programs, and of course you’ll need to know enough basic HTML and CSS to build a simple front-end. That’s quite a bit of non-programming knowledge, but on the other hand, again it’s something you can find your way around within a few days and the experience will be valuable if you ever wind up writing almost any sort of server software, whether it’s sitting behind a web site or not.<p>If you prefer native software rather than the web but want to try something more substantial, another classic exercise is to write yourself a simple text file editor. It doesn’t need to be anything flashy, but again there will be interactive elements and you’ll have to think about how to organise a slightly larger amount of code. You’ll also have to deal with things like reading and writing files, which brings with it the possibility of things going wrong and needing to recover from errors, as well as working with larger amounts of data, where the simplest and most obvious structures for that data don’t scale and you need a slightly more sophisticated approach. Again, these are all important general ideas that you’ll encounter time and again, and while of course you won’t yet know a lot of the more advanced techniques that real world software like this would use, you can start to develop an appreciation for <i>why</i> sometimes the simple and obvious approach isn’t effective enough and how more advanced techniques can be useful.<p>If you want something a bit more substantial that combines a few different ideas, you could try writing yourself a simple spreadsheet that can manage a big table, let you write some simple formulae to generate data in the cells (which introduces another important general programming skill, the ability to parse structured text input), and draw some simple charts of the data, as well as reviewing things like saving and loading data either using files if you do it as a native application or using a database if you do it as a web application.<p>If you try a few different projects like these then somewhere along the way you will also start to run into recurring questions about how to organise the structure of your code, how to investigate when things aren’t working properly, how to keep track of changes in your code over time, how to combine your own code with the tools provided by your programming language and/or libraries written by other people, where to find more detailed information about your languages and tools and how to navigate that information, and so on. At that point, you should be in a better place to start exploring more powerful tools and techniques on your own. Hopefully you’ll also be starting to get a feel for which areas of programming you do or don’t enjoy, and can branch out into other areas like mobile app development or machine learning or natural language processing or making big data visualisations or whatever else takes your fancy.<p>Around that time, you might also like to experiment with additional programming languages.
I would recommend trying at least one more that is quite similar to Python, so you can see how much of what you’ve learned transfers around between similar languages, and at least one that is very different, because there are vast areas of the programming landscape that Python doesn’t really address but other languages and tools do. For example, it would be hard to go wrong with learning a bit of C at that point, both because it’s also widely useful if you’re working closer to the metal (embedded software, operating systems, networking, device drivers, etc.) and because C is the <i>lingua franca</i> of the programming world and if you ever need to build larger systems that involve multiple programming languages then knowing C will almost always be useful for understanding the bridge between them.<p>Hmm... That turned out to be a much longer comment than I expected, but I guess it sums up my general advice to new, self-teaching programmers fairly well. Good luck!
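P.S. Since I mentioned the Mandelbrot set above, here’s roughly how small that first project can be. This is a sketch, not a model answer: every constant in it is just a choice you can play with.<p><pre><code> # A tiny ASCII Mandelbrot renderer; nothing beyond the standard library.
def mandelbrot(width=78, height=24, max_iter=40):
    for row in range(height):
        line = []
        for col in range(width):
            # Map the character grid onto a window of the complex plane.
            c = complex(-2.5 + 3.5 * col / width, -1.25 + 2.5 * row / height)
            z = 0j
            for i in range(max_iter):
                z = z * z + c
                if abs(z) > 2:  # escaped: point is outside the set
                    line.append(" .:-=+*#%@"[i % 10])
                    break
            else:
                line.append("@")  # never escaped: inside the set
        print("".join(line))

mandelbrot()
</code></pre>
Change the window, the character ramp or max_iter and watch what happens; that quick feedback loop is exactly the point of starting with projects like this. |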
Why Don’t Americans Understand How Poor Their Lives Are? | For some context, I travel about 20% of the time for work. While how much of that is international varies from year to year, it's usually the majority of said travel. I've been to every continent on the planet, except for Antarctica. My opinion here is not formed by lack of exposure.<p>While I generally try to avoid talking like this on HN, I don't see how to avoid it here:<p>The author of this piece is a pretentious asshole. Basically every premise he argues is wrong or purely opinion-based. Taking a quick look at his Medium post history, this sort of article seems to be his bread-and-butter writing subject, as well.<p>Some people have to work earlier than 9 or 10am in these countries. (Who does he think is making the trains run and brewing the coffee he drinks? Do they not count as people working?)
Plenty of people in the US don't have to work until 9 or 10. Plenty of people go home at 5 in the US.<p>US food tastes worse? Man, I've been to London. I've been to Berlin. Two of his examples. I've had a variety of food at each. Were their non-chain cafes and restaurants better than some of the US chains? Sure. Were they better than non-chain cafes and restaurants in the US? No. And the US has EVERYWHERE else beat on food diversity. Not a single city in Europe can hold a candle to the food diversity you get in New York City, Los Angeles, or even Houston. With average quality being similar at non-chains, and the absolute diversity being significantly better in the US compared to anywhere else in the world, the American diner who wants more than Chili's or Applebee's or whatever is in a better position than the average diner in the UK. On the top end, in the Michelin world, I don't think there's a clear winner, but having eaten at a variety of 3 star places in the US and abroad, I can say it's comparable.<p>You can't watch documentaries on US TV? Completely false. You can't watch Swedish crime noir? Well, probably true. But we're also living in a post-cable world. Things like Netflix and other streaming services open up much more global access to media. And the idea that America is incapable of producing quality and interesting media is insane.<p>Fashion, art, etc? All somehow objectively worse in America? The suppositions here are just blatantly influenced by personal opinion.<p>I don't think America is the best at everything in the world. Basically everywhere I've spent time internationally - and some of that is a lot of time! - there are specific things that I think are done better than their American counterpart. But I can also point at things that are done worse. Even things that people laud a country for. People act like Germany has the best beer in the world - yet from an objective standpoint they have been largely at a creative standstill for more than a century. America is clearly at the forefront of craft beer, experimenting more, creating more, doing more and better than anywhere else. And beer might seem like a silly thing to measure anything by, but this whole article kicked off by talking about coffee, so whatever.<p>He pulls out one statistic that he can prove - life expectancy - and then lists a string of others stating that they are the same. Some of them are so vague as to be meaningless - America is statistically worse at "stress" - what does that mean? Others, you look, and sure enough, America isn't on top. But then you look at something else. Japan is #1 in Life Expectancy - but on press freedom, another of the measurements the author champions, Japan is significantly worse than the US, and the UK is ranked basically the same. Quality of democracy? The US ranks higher than portions of the EU, lower than some, but the absolute quality is so similar that the first 30 places in the ranking are largely indistinguishable.<p>There are things wrong with America. There are very few things I can say that I think America is best at - though there are some!<p>But it's a pretty good place to live. And if you look at the balance of all of these important rankings, the US averages out pretty well, because while it isn't topping any of them, it's consistently good, or okay at worst, whereas some other countries that are exemplary of some of what he claims as important have crazy swings on their rankings and are terrible at others.
The US ranks higher in happiness than the majority of the EU. It ranks lower in suicide rates. If we're going to look at specific sets of statistics, are these not important too?<p>If this article were "Why don't Americans understand that they're not the world leaders on every single aspect of life", it would make sense. The US could do with understanding that it is a world leader in very few things. That there is a lot of room for improvement. But the idea that life on average is so much worse in America than it is in the rest of the world is stupid. |
The smartest Bitcoin trade in town? | DECEMBER 7, 2017 By: Izabella Kaminska<p>Over the last week or so, we’ve recounted the problems with bitcoin’s market structure and how they are likely to impact the upcoming launch of bitcoin futures (here, here and here).<p>In the course of explaining the structural difficulties, we’ve pointed out how the capacity of market makers and bi-directional traders to support the product is crucial if bitcoin futures are ever to become a success. Currently, this is unlikely to happen because there is no easy way to play both sides of the market without taking on huge amounts of credit, fragmentation, illiquidity and hacker risk on the physical side.<p>As it stands, CFD and spread-betting houses are the ones mostly attempting to provide this bridging role. Problem is, even they are struggling to process the risk — and that’s despite being much less intensively supervised than the more established players who would usually be interested in servicing futures markets.<p>Some sort of risk-absorbing entity, as a consequence, must appear if retail and institutional participants (who are used to fiduciary standards) are to step into the market in size.<p>As a result, there are only three possible scenarios from here on in:<p>1. The futures (plagued by illiquidity and non-convergence with the underlying) flop.
2. The lack of a market-maker redistributing one-sided risk back into the market will see the risk transferred elsewhere, most likely into the clearing house (to the risk of the entire trading community).
3. A less established player with a greater tolerance for risk — possibly a natural long — steps into the fray.
The third option doesn’t necessarily prevent the second option from playing out, however, given such an entity would still have to be serviced by the CME/CBOE clearing systems.<p>Nevertheless, let’s imagine such an entity exists. What would its game plan be? And why would it think it could handle the risk?<p>The easy answer to the second question is that it may have spotted an arbitrage it thinks could more than compensate for the risk at hand.<p>As to the game plan…<p>If you’re gunning to be the only entity in town prepared to sell bitcoin futures, it would be in your interests to start “pre-hedging” physical bitcoin as soon as possible with a view to locking in a risk-free basis return once the ability to sell futures on a regulated venue becomes possible.<p>Ideally, the trade would require an average purchasing price that’s much lower than the rate bitcoin futures would eventually be sold at. To maximise this trade, as much value would have to be ploughed into the purchase (or generation) of bitcoin ahead of time, as likely buyside demand for the futures once launched.<p>In terms of timing, due to bitcoin’s illiquidity, a “pre-hedging” position of this size would no doubt take time to put on. From that perspective it would make sense to start purchases as soon as a futures contract looked even remotely viable. FWIW, according to Factiva, the first serious clue CME was looking into a listing came in November 2016 when it launched a pair of indexes designed to track the virtual currency’s price. The CFTC’s decision in July to allow LedgerX to run a swap execution facility for bitcoin options, meanwhile, was a likely indicator a futures contract could be approved soon as well.<p>Nevertheless, no matter how strategically planned, pre-hedging of this size is always bound to leave a market footprint. (Especially in a market as illiquid as bitcoin.)<p>With that, the sort of self-fulfilling feedback loop that usually occurs when someone attempts to corner a market probably comes into motion.<p>For many, sparking a feedback loop of this kind is often deemed a trading objective in and of itself. Not for a bona fide smart operator. The smart money understands a good trade is as much about executing the “out” as it is about positioning the “in”. You can start a pump, but you can’t always profitably orchestrate the dump.<p>Hence the importance of the futures component of the trade cannot be overstated.<p>Without the certainty of a properly regulated marketplace defending the futures side of the equation, a payoff cannot be guaranteed. It’s why the arbitrage exists in the first place: the trade cannot be exercised elsewhere in the bitcoin ecosystem because the risk of counterparty default at the first sign of distress is far too great. You’d win the trade, but you’d probably lose the payout.<p>If a regulated futures market takes that risk away, however, the trade becomes a no-brainer if not a mastermind opportunity.<p>Which is arguably where we are now.<p>Indeed — if such an entity does exist — there are only three risks that threaten the trade at this point:<p>1. The futures launch is suspended unexpectedly, perhaps due to regulatory concerns or industry pushback.
2. The anticipated futures demand never arrives because the margin costs of holding open long positions prove to be too great and/or liquidity is so shoddy nobody can trade effectively.
3. The cash-settled return on the futures leg won’t necessarily be matched by the price achieved on the physical liquidation.<p>One way or another, we will find out soon.
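To put toy numbers on the basis trade being described (every figure below is invented purely for illustration, and fees, funding and margin are ignored):<p><pre><code> # Cash-and-carry sketch: accumulate physical early, sell futures high.
qty      = 1_000.0   # coins accumulated while "pre-hedging"
avg_cost = 8_000.0   # average purchase price of the physical
fut_sold = 17_000.0  # level at which the cash-settled futures are sold
settle   = 15_000.0  # spot at expiry, used for cash settlement
slippage = 0.10      # fraction lost liquidating the illiquid physical leg

fut_pnl  = (fut_sold - settle) * qty                   # short futures leg
phys_pnl = (settle * (1 - slippage) - avg_cost) * qty  # physical leg

print(f"total P&L: ${fut_pnl + phys_pnl:,.0f}")
</code></pre>
Note how the settlement price cancels out of the total except through the slippage term, which is exactly the third risk above: the cash-settled futures return is only as good as the price achieved unwinding the physical. |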
How do you find your passion? Am I off course? | Passion is overrated. No, worse than that. All that advice about "finding your passion", etc. for work, is corrosive.<p>Oh, yes, there are people who love X, make a career out of it, and continue doing X for the rest of their life and are successful.<p>Then there are people who love X and try to make a living doing X, only to find it's not a viable career. Only, their "passion" makes them continue to do X far longer than they should, causing them to burn out.<p>There are all sorts of fields where the number of passionate people is far greater than the number of available jobs. Look at people who are passionate about dance, or sports, or horses, or art. Only a few of them can make a living from it. Yet everyone successful had to be passionate about it.<p>Now, get this. There are also people who start doing Y because a job is available, and find that, after a while, they enjoy doing Y. Some even find they are passionate about the job. Others use the money from Y so they can do what they are really passionate about as a hobby.<p>(And sometimes that hobby can, over time, become a career. I knew someone who was a manager of the grounds crew for the local cemetery, and a hobbyist woodworker. There was enough demand for his work that he now makes custom furniture for a living.)<p>The worst case in this scenario is to find that you really don't like Y, but nothing says you have to do Y for the rest of your life, and at least you have the money saved up to support you while you switch jobs. Otherwise, a lot of people do Y for a living, and let the income fund their passion as a hobby.<p>(For another example, my dentist decided to stop his practice and be a musician. I think it was a mid-life crisis, but at least he had the money saved up for it.)<p>Here's another thing - most jobs aren't fun. Most jobs are boring. For that matter, David Graeber argues that most jobs are bullshit, that is (quoting <a href="https://www.strikemag.org/bullshit-jobs" rel="nofollow">https://www.strikemag.org/bullshit-jobs</a> ):<p>> The answer clearly isn't economic: it's moral and political. The ruling class has figured out that a happy and productive population with free time on their hands is a mortal danger (think of what started to happen when this even began to be approximated in the '60s). And, on the other hand, the feeling that work is a moral value in itself, and that anyone not willing to submit themselves to some kind of intense work discipline for most of their waking hours deserves nothing, is extraordinarily convenient for them.<p>One way to interpret the general call for "passion" in your job is that it reinforces the cultural norm "to submit themselves to some kind of intense work discipline".<p>As for your specific situation, not all work environments are like Microsoft. There are companies which are more supportive of their employees. One friend of mine, with a mostly system administration job, worked for several programming-centered companies for a while, but it wasn't until he worked for a company doing light industrial work that he really enjoyed his job. He liked that they were making actual things, rather than bits, and the owner/boss also did a great job of looking out for the employees.<p>If you're currently making $3000/month = $36K/year, then that doesn't leave you much money for the one thing you said that you are passionate about - international travel. While $50/hour is $100K/year.
That's much more than is needed to support a travel hobby.<p>You might consider looking at how much vacation time you have for a given company, or negotiate for more unpaid time off instead of salary.<p>Or, since you're in your 20s, it's pretty common to change jobs. Some people will work for a company for a year or two, then take 3 months off for travel, then work again.<p>Ignore what a recruiter thinks. Making you happy is far down the list of what they are supposed to be doing.<p>And drop this idea that you need to make your body "perfect." That's another corrosive attitude. Your body will never be perfect, making it all too easy to give up before doing what you should be doing, which is to get some exercise every day. It could be walking, dancing, going to the gym, cycling, swimming, martial arts ... something that gets you moving.<p>(Edit: To be clear, "exercise" doesn't have to mean a gym membership or something formal. At your age I mostly bicycled for local errands, like shopping. It helped that I was still in a college town. Later I walked my partner's dog for 30-45 minutes. Later I started salsa dancing, and got to the point where I was dancing 4-5 times per week. (A passion, perhaps, but not a good career choice.) Now I walk, and swim 30 minutes twice weekly. :)
Opinions on functional programming and OCaml | I'm gonna have to defend vjeux here a bit. Disclaimer: I work on Reason (and help manage its community), a layer on top of OCaml, and targeted (well, cited) in the post. I've known vjeux for a while too. If he's accused of being a "junior engineer" then I don't know what the heck most of us are doing.<p>Lack of names: this deserves to be solved. OCaml in particular pays a bit more attention to it than others. Labeled arguments, inline records, objects, variant constructors, etc. are all solutions to this. Tuples are usually the target of criticism when it comes to lack of names, and I do think the criticism is mostly valid. The convenient-ness of a language feature dictates its usage. When you can trivially return an obj/map from js/clojure you wouldn't resort to returning an array of 2 elements. But when these alternatives are heavier in a language (ocaml objs are rather heavy visually and have no destructuring; records need upfront declaration), you do see a bit more of the proliferation of tuples. This can be solved, but since it's deemed an "uninteresting" problem, it stays as one of the low-hanging fruits. In general, though, I've come to appreciate OCaml's pragmatism regarding these matters. It does try to solve these language usability concerns. The other offender is parametric types, but the situation is the same in typed JS solutions.<p>Hard to track "mutations": tangentially related, but in a parallel universe where FP is pervasive, I can see how folks might say "hey this pattern of passing a self-like argument first is used so often, we should create a new syntax for it (dot access) and optimize it specially for perf & tooling". Anyway, uniformity by definition erases distinguishability; sometimes the distinguishability is appreciated for e.g. perf and ease of grepping. Note how recent JS syntactical features are almost always faster than their polyfilled equivalent (obj spread, async, generator, arrow function).<p>Partial evaluation: the "monad" of beginner FP experience basically, in terms of social effects. From watching the community for so long, currying often seems to elicit a period of "this is weird -> oh I get it -> this is the greatest thing and I'll violently defend it against naysayers -> you know what, it's not all great; it's fine". Currying in an eager, side-effectful language is actually troublesome to implement & use. Some compilers don't optimize it, or worse, don't even get it semantically right. Won't cite examples.<p>Higher-order functions: same problem in JS. But yeah, combined with partial app this isn't immediately clear: `Foo.bar(baz(qux))`. What's the arity of `baz`? More importantly, at what "stage/state" of the function's logic are we now? For that specific example, you can argue that the name `x` isn't that much more indicative (which ties back to the first point). But these are the exceptions rather than the norm. I'm sure people are fine reading `map(foo)`. The general point's still valid.<p>===========<p>I'll stop here because I'm bored, but you can see how these things aren't black and white once everyone just takes a deep breath, thinks a little, and finds a way to communicate a bit more nicely with each other. In some of the above points, I'm playing the devil's advocate because I feel it's needed to balance the overwhelmingly negative sentiment.
Sorry if my emotions come through a bit here, but it's a bit sad to see that some of the less polite replies I've seen come from FP folks who actually barely started FP, through ReactJS/React Native, got overly excited to finally find a target to criticize in an act of catharsis, without realizing they're criticizing the co-author of said frameworks. Look, people are watching; disregarding whether the author's points are right, you'll be judged on how you react to them. And your collective reaction is a good assessment of how resilient the paradigm is against the real-world's sometimes nitpicky, sometimes serious, criticisms. The best engineers I've worked/am working with are able to cite tradeoffs and admit that their paradigm isn't perfect. It's an indicator that you've finally "got it", that you're able to assess a subject's nuances rather than seeing it as a binary thing.<p>I'm a bit frustrated to see that vjeux's post had to be retracted and that he had to apologize. Imagine the potential improvements we could have collectively made had these issues not been casually/emotionally dismissed. Now once again the gist and this reply will be forgotten and we'll have to move on and count on word-of-mouth to propagate solutions to these criticisms rather than codify them somewhere like the programmers we should be. On the other hand, I am glad to see that most of the harsh replies don't come from the Reason community. Ultimately, I wish for the community to learn to welcome newcomers, to learn _how_ to educate (and not just what), to understand FP's tradeoffs, to stay mature to get work done.
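For anyone who hasn't hit the arity point in practice, here's a tiny illustration (Python rather than OCaml, with functools.partial standing in for currying; all the names are invented):<p><pre><code> from functools import partial

def transfer(source, dest, amount):
    print(f"moving {amount} from {source} to {dest}")

pay_alice = partial(transfer, dest="alice")

# At the call site, nothing signals how many arguments remain or which
# ones were already applied; you have to go look up the definition.
pay_alice("bob", amount=10)
</code></pre>
That's the readability cost being discussed; whether it outweighs the conciseness is exactly the kind of tradeoff that deserves a calm conversation. |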
One-on-one meetings are underrated, whereas group meetings waste time | This is micromanagement 101. (Sorry this turned into a bit of a wall of text.)<p>If you find one-on-one meetings more productive than group meetings, then you are terrible at running group meetings. Full stop, I don't even need to know how good this guy is at running meetings.<p>Let's walk through some of the benefits the author lists first...<p>1) Participants can speak without boring others. This is an indicator of two possibilities. One, you have the wrong people at this meeting - that's your fault as a manager (and probably as the meeting leader/facilitator). Or two, you are running the wrong kind of meeting - again, that's your fault as a manager. Meetings should be purposeful, and every participant should have a purpose in being there. You as a manager (and hopefully your team) should consider this before every meeting (trust me, it's easier to do that than hold several times more meetings).<p>2) People can speak without being interrupted. If you're letting people interrupt each other, you're bad at leading meetings. If you're leading a meeting, you're there to serve the purpose of the meeting, not to be everyone's friend. Again, that's your fault as a manager.<p>3) They can offer negative opinions about their coworkers. Whoa. Now we're in <i>really</i> bad management territory. How are you going to go about conflict resolution once this person has confided in you? How are you going to be a neutral party? Not only that but giving you that negative opinion didn't solve anything.<p>4) They can offer positive opinions about their coworkers. Haha. What. "The too obvious incentive of their co-worker hearing their praise." Jesus. Where do you work? This isn't even bad management, this just sounds like a horribly unhealthy organization.<p>5) "If a person is running late on a project, the best way to find out the reason is to have a one-on-one meeting with them." I mean did you have to have a one-on-one meeting to find this out? If you have people that are unwilling to take responsibility for <i>their</i> (yes, their, not <i>your</i>) projects, then you need to check your hiring practices.<p>"They are free to incriminate themselves, in ways that I find useful." Jesus Christ. Again, where the hell do you work? In ways that you find useful? "For instance, they might put all the blame on someone else. After the meeting I will investigate their accusations and discover the truth." Or, if you had done this in a group setting, they wouldn't have had that option and you wouldn't have had to go play Sherlock Holmes with your employees. But, I guess you have the free time? "If I’d called a group meeting, and that other person was in the room, it’s unlikely that anyone would have told me the truth." Or you could hire people that take responsibility for their actions.<p>6) "What if someone finishes a project much faster than I expected, or with much higher quality than I was expecting?" "It would be awkward to try to have these conversations while other people are in the room." I mean, maybe. They might have questions too, or be interested in the answers. Again, if you take the time to actually think about the purpose of the meeting you want to have you won't run into this problem. There is a time and a place for one-on-one meetings.<p>Now, let's look at some of the things he dislikes about group meetings...<p>"As it was, during the typical meeting we had 15 people in the room, most of whom were bored." Whoa. What.
15 people!? That's twice as many people as you should probably ever have in a meeting.<p>"So who is right, and who is wrong?" Didn't see a whole lot of managing in that conversation. Do your meetings even have leads/facilitators? Who is holding the participants accountable for the purpose of the meeting?<p>"You’ll need to figure this out, but you don’t need to do so while 12 other people are in the room. If you are the manager who is overseeing this, it is up to you to get people back to work." Then do that, get the meeting back on track.<p>"A great manager doesn’t allow such debates to exist, because they don’t hold the kinds of meetings where this behavior is possible." WHOA. Nope. This is about as bad as management can get. Debates are healthy, conflict isn't bad. How else are you (and your organization) going to learn and grow?<p>There's a lot in there about "client" meetings, but not so much about internal meetings. Why is that? Sounds like the author is shifting the blame... I like to have managers that take responsibility for their actions...<p>So, what's the real conclusion of this article? Aside from there clearly being some "client" issues. One-on-one meetings are easy. Here's why:<p>1) They are easy to lead. Leading one is easier than leading five.<p>2) They avoid, more or less, all conflict. Even the productive kind. You can be everyone's friend.<p>3) They mean you (as a manager) don't have to think about who should attend.<p>4) Everything on your team has to go through you (I believe this is also called micromanagement).<p>They also are far less efficient and effective if you know how to lead a group meeting. |
Reversible Computing (2016) [video] | Norman Margolus described some of his ideas about simulating physics, programming lattice gases, parameterizing reversible rules, playing and learning with interactive CA:<p>>In your todo list, my favorites are the features that make the play more interactive, such as tools for constructing initial states, and anything that makes it easier for users to define their own CA rules. For interesting rules to play with, most of my favorites are in the paper Crystalline Computation, though I do have some additional favorites since that paper was written.<p>Crystalline Computation:
<a href="http://people.csail.mit.edu/nhm/cc.pdf" rel="nofollow">http://people.csail.mit.edu/nhm/cc.pdf</a><p>>The rules that Tom and I are most attached to are the ones that are closest to physics, and those are mostly reversible lattice gases (or their partitioning equivalents). I would also say that lattice gases provide the best model both for simulation efficiency using machines with RAM, and the simplest and most flexible mental model for constructing interesting local dynamics. This may be a bit of a distraction for you, but my best understanding of how to efficiently simulate lattice gases is discussed in the paper, An Embedded DRAM Architecture for Large-Scale Spatial-Lattice Computations . I believe the technique discussed there for embedding spatial lattices into blocks of DRAM is also the best for efficient simulation on PC's.<p>An Embedded DRAM Architecture for Large-Scale Spatial-Lattice Computations:
<a href="http://people.csail.mit.edu/nhm/isca.pdf" rel="nofollow">http://people.csail.mit.edu/nhm/isca.pdf</a><p>>My perspective is that CA's (and particularly lattice gases) are a modern and accessible version of classical field theories. Physicists have long used them as toy models to understand real physical phenomena, but their importance goes much deeper than that. It doesn't seem to be widely appreciated that most of the quantum revolution consisted of exposing the finite state nature of Nature! We looked closely at what we thought was a continuous world and we saw ... pixels! There is also some inscrutable screwiness that came along with that, but if you focus on just the finite-state part of the new physics, reversible lattice gases should be considered fundamental theoretical models. To quote from the end of my "Crystalline Computation" paper,<p>>"Although people have often studied CA’s as abstract mathematical systems completely divorced from nature, ultimately it is their connections to physics that make them so interesting. We can use them to try to understand our world better, to try to do computations better—or we can simply delight in the creation of our own toy universes. As we sit in front of our computer screens, watching to see what happens next, we never really know what new tricks our CA’s may come up with. It is really an exploration of new worlds—live television from other universes. Working with CA’s, anyone can experience the joy of building simple models and the thrill of discovering something new about the dynamics of information. We can all be theoretical physicists."<p>>So I think it's important here that people can explore new rules, not just play with initial states. New rules create new universes that no one has ever seen before. As a way to make this more accessible for casual users, we could provide some parameterized reversible rules, where many values of the parameters give interesting universes, and let them explore. That's a bit more advanced than just playing with simulations from the CAM-6 book, but people will want to do both. The actual CAM-6 machine was very much a universe-exploration device, with a specialized language that let you write rules easily, and pressing single keys during a simulation let you start and stop the dynamics and run backwards and zoom in on the middle, and shift the entire space (arrow keys) and change the middle pixel or insert a small block of randomness in the middle, and "stain" the rendering in different ways to bring out different features. The actual hardware was lookup-table based, so all rules ran at the same speed. Each experiment defined some extra keys of its own (Capital letters were reserved for the experimenter, lower case and non-letters for the system. Adding "alias X <comment>" after a function definition attached it to the X key. Numbers pressed before pressing X would be available to the function as a parameter. Pressing "m" gave a menu of the keys the experimenter defined, and their <comment>s). Tommaso thought of it as being like having various keys and stops on an organ that let you make music. A GUI-centric interface might be better today, though single keys still have the advantage that you don't have to take your eyes off the simulation. Have you ever played with the hardware CAM-6? If you can find a vintage 286 or 386 machine, you could plug one in (I still have some somewhere) and play a bit, to get a better feeling for the spirit of it. 
I also still have a working CAM-8, but I think 3D lattice-gas simulations are out of scope for the moment. If, however, you can implement 2D lookup-table lattice-gas computation efficiently in JavaScript, that might be a good way to emulate the CAM-6 experiments, and make general lattice gases easy and efficient to play with. We defined and implemented a bunch of the CAM-6 experiments as very simple lattice gas experiments on CAM-8.<p>>As you can see, my personal inclination would be to emphasize the reversible stuff in the CAM-6 book, and the Crystalline Computation paper. And tell people they are being theoretical physicists!
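If you want to feel the reversibility trick in your own fingers before diving into the papers, here's a minimal sketch of the second-order construction (Fredkin's trick) that makes <i>any</i> rule reversible; the particular rule below is an arbitrary parity-of-neighbours choice, purely for illustration:<p><pre><code> import random

def rule(cells):
    # Any function of the current state works here.
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def step(prev, cur):
    # Second-order update: next = rule(current) XOR previous.
    nxt = [r ^ p for r, p in zip(rule(cur), prev)]
    return cur, nxt

random.seed(0)
a = [random.randint(0, 1) for _ in range(16)]
b = [random.randint(0, 1) for _ in range(16)]

p, c = a, b
for _ in range(100):
    p, c = step(p, c)
# To run time backwards, swap the pair and apply the very same rule.
p, c = c, p
for _ in range(100):
    p, c = step(p, c)
print("recovered the initial state:", (p, c) == (b, a))
</code></pre>
One hundred steps forward, a swap, one hundred steps of the same dynamics, and the initial configuration comes back exactly; no information is ever destroyed, which is what makes a "run backwards" key like CAM-6's possible. |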
Wi-Fi Alliance introduces security enhancements | > WPA2 provides reliable security used in billions of Wi-Fi devices every day.<p>What do you mean, "reliable", Wi-Fi Alliance? I get it: it's reliable if the DEAUTH attack has been fixed by deploying 802.11w[0], provided the randomness is generated by a proper CSPRNG (unlike the stupid one from the appendix of the official spec!) with a high-quality entropy source (not system uptime in milliseconds!)[1], provided that bruteforce-friendly WPS ("Protected" Setup) has been deactivated, and finally that the KRACK attack has been patched on both sides? And in addition you don't forget to disable TKIP and force CCMP[2]?<p>This is another reason why (correctly-deployed) HTTPS must be pushed forward. Just imagine these Wireless Internet of Shit devices.<p>Despite the PR and marketing blurbs in the article, I hope it's good news, and I sincerely hope the engineers and designers have good insight into what's going on in cutting-edge modern cryptography this time, and also understand my points below. Hopefully things will go better.<p>Even without these flaws, WPA's crypto is far from ideal from a modern perspective.<p>The lack of Forward Secrecy is the greatest blunder of the protocol. Anyone can record all the data packets, then decrypt them later if the passphrase is discovered. The designers at IEEE did realize the problem, and they came up with an unnecessarily complicated handshake protocol to derive a session key every time you connect to an AP. But the handshake itself is ultimately still protected by the single passphrase! Coincidentally, the DEAUTH loophole enables attackers to pretend to be the AP, kicking everyone out of the network and forcing them to redo handshakes for immediate capture and wiretapping (in case you never had a chance to get the previous ones), combo!<p>Hence WPA-"Personal" is crap, so you go to WPA-"Enterprise", more precisely WPA-EAP, which is extremely complicated to configure with little benefit in terms of security (the major advantages are accountability, authorization, etc., though), as most OSes don't even check the root certificate by default. The only workaround I could recommend to people who need serious security over Wi-Fi is to leave the existing WPA untouched and untrusted, then install a solid VPN on the gateway and require everyone to connect through it for LAN/WAN access.<p>The real solution is simple, exactly the same as how we fixed Transport Layer Security and HTTPS:
Diffie-Hellman Key Exchange.<p>The absence of DH may be justified by the low computational power available in the early 2000s, but that can't be an excuse anymore. Just perform a DH exchange before you transmit any real data; it couldn't be easier, and no unnecessarily complicated handshake protocol is required. You might still say, well, what about the Man-in-the-Middle? Sure, but cracking Wi-Fi would no longer be as simple as capturing packets and typing the passphrase into Wireshark, period. It also works for public networks without passwords.<p>In fact, interestingly, WPS REALLY DOES PERFORM THE DH to protect the main passphrase; in other words, for every single PIN code you try to bruteforce, there's a DH exchange... The world is full of ironies.<p>Even better, because a pre-shared key is already present (the Wi-Fi "password"), the key itself can be used (and should only be used) to authenticate the DH. MITM problem solved, no user experience changed, no internal PKI/CA needed, but strong cryptography and forward secrecy, and the only prerequisite is a strong passphrase, which can be generated with EFF's Diceware[3] by all security-minded users.<p>Wi-Fi, secured for the first time.<p>It can be even better if the Diffie-Hellman is based on good crypto, something like X25519, with authenticated encryption like ChaCha20-Poly1305, and a modern key derivation function like Argon2 to stretch the passphrase into a stronger key and resist bruteforce attacks. (A toy sketch of this passphrase-authenticated DH idea appears at the end of this comment.)<p>And you can always use lightweight equivalents (but not low-security versions) of these cutting-edge algorithms if hardware constraints are an issue.<p>BTW, this is what "ECDHE_PSK Cipher Suites"[4] in TLS are about: forward secrecy and high security with only a passphrase and no certificate, for your HTTPS, etc. They should be easy to use and widely used. But unfortunately nobody ever noticed them, and LibreSSL even removed them due to lack of interest and the maintenance burden (which contributes to security problems in the end). In fact you could just make WPA-EAP's TLS use an ECDHE_PSK cipher suite and have "Enterprise" security with only a password, but nobody supports this cipher suite anyway...<p>[0] <a href="https://en.wikipedia.org/wiki/Wi-Fi_deauthentication_attack" rel="nofollow">https://en.wikipedia.org/wiki/Wi-Fi_deauthentication_attack</a><p><pre><code> https://en.wikipedia.org/wiki/IEEE_802.11w
https://github.com/DanMcInerney/wifijammer
</code></pre>
Basically, the 802.11w defense is undeployable. Most clients and APs still have no support. Even if you control all the devices, 802.11w needs to be implemented by the device driver, but what a driver can do depends on the on-chip firmware, which has no support for it! And you know the firmware is proprietary and you can't do anything with it.<p>If my memory is right, only two or three Wi-Fi chip families have 802.11w support on Linux, and hardware tinkerers will not be disappointed: just as expected, it works best with ath9k, a family with official involvement in mainline driver development and no on-chip firmware at all.<p>You can enable 802.11w on LEDE if a supported chip is used; RTFM for instructions. You should only use "Optional", not "Mandatory", for the above reasons.<p>[1] Predicting, Decrypting, and Abusing WPA2/802.11 Group Keys
https://news.ycombinator.com/item?id=15478750
</code></pre>
If you are using a Linux-based solution with newer kernels, like OpenWrt/LEDE, it's completely safe from these attacks, thanks to the improvements to /dev/random in newer kernels, and kudos to hostapd's developers for using the correct randomness from the beginning.<p>However, if the device uses a proprietary implementation, even w/ Linux, it's likely a wrong one, which accounts for 30% of the market share, I guess... (and what, you said firmware updates?)<p>[2] All Your Biases Belong To Us: Breaking RC4 in WPA-TKIP and TLS<p><pre><code> https://www.usenix.org/conference/usenixsecurity15/technical-sessions/presentation/vanhoef
https://www.youtube.com/watch?v=Et8E1Y9c2GM
</code></pre>
RC4. Enough said.<p>[3] EFF's New Wordlists for Random Passphrases<p><pre><code> https://www.eff.org/deeplinks/2016/07/new-wordlists-random-passphrases
</code></pre>
[4] ECDHE_PSK Cipher Suites for Transport Layer Security (TLS)<p><pre><code> https://tools.ietf.org/html/rfc5489</code></pre>
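P.S. Here's a toy sketch of the passphrase-authenticated ephemeral DH idea above, using the Python "cryptography" package. This is absolutely not a real protocol; the labels, derivations and passphrase handling are all invented for illustration (a real design would stretch the passphrase with Argon2, not a bare hash):<p><pre><code> import hmac, hashlib
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# The pre-shared passphrase only authenticates; it never encrypts traffic.
psk = hashlib.sha256(b"correct horse battery staple").digest()

def raw(private_key):
    return private_key.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# Fresh (ephemeral) keypairs per session -- this is the forward secrecy.
ap_priv, sta_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
ap_pub, sta_pub = raw(ap_priv), raw(sta_priv)

# Each side MACs its ephemeral public key under the PSK; a MITM without
# the passphrase can't forge these tags, so it can't splice itself in.
ap_tag = hmac.new(psk, b"AP" + ap_pub, hashlib.sha256).digest()
# ...ap_pub/ap_tag cross the air (the station's tag is the mirror image),
# and each side verifies the peer's tag before continuing:
assert hmac.compare_digest(
    ap_tag, hmac.new(psk, b"AP" + ap_pub, hashlib.sha256).digest())

# Both sides land on the same per-session key from the DH shared secret.
shared_sta = sta_priv.exchange(X25519PublicKey.from_public_bytes(ap_pub))
shared_ap = ap_priv.exchange(X25519PublicKey.from_public_bytes(sta_pub))
assert shared_ap == shared_sta
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"toy-wifi-session").derive(shared_ap)
print("session key established:", session_key.hex())
</code></pre>
Record every packet of this exchange and learn the passphrase next year: the session key is still gone with the ephemeral keys. That's the whole point. |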
Ask HN: Best practice for estimating development tasks? | I've spent the last few weeks condensing years of experience and many successful agile teams' processes into a concise and holistic guide which can be found here: <a href="https://agileforleads.com/" rel="nofollow">https://agileforleads.com/</a><p>Here is my verbatim section entitled "8.3 Estimating a Story". Be sure to take this in the context of the whole process and tooling, which is covered in detail in the ebook <a href="https://agileforleads.com/" rel="nofollow">https://agileforleads.com/</a><p>><p>Remember how I said that if you only take one action from this course to your team it should be the Retro? Well, the second most critical action to your #agileforleads process is the Estimation; specifically consistent and repeatable estimation. Sometimes this is called Backlog Grooming or Backlog Refinement. Those terms don't work for me because they suggest that it's kinda optional, and for us, it isn't. Other teams call it "pointing [a story]" and you'll see why below.<p>Not every Agile process will be scientific. Many of the common agile patterns don't need to be. Scrum practitioners will generally apply the following but, as usual, I'm going to refine it a little more for you so that it's not only easier to use, but substantially more valuable.<p>Don't get stressed or hung up on being careful to ensure that you pull off estimation activities "by the book" because I've said it's so important. Just carry these tips with you and try to throw a new one in each time and eventually, over time, you'll find a consistent groove.<p>Being able to reliably predict future delivery rates based on historic delivery rates requires control. Controlling the inputs ensures more accuracy for the outputs. It's just science.<p>The rules for our estimation are the following:<p>* We use "points" instead of hours or person-days. Our points are a Fibonacci sub-set: 1, 2, 3, 5, 8 and "too big".<p>* We use Planning Poker for "throwing down a number" and we use our hands to do it. Avoid apps and playing cards if possible. Like the Retro voting, we don't reveal votes until everyone has silently selected a number. Everyone reveals together.<p>* When there is only a single size difference (eg. min is 3 and max is 5) we "go large" (select the 5).<p>* Use all 10 fingers to indicate "too big" and as early as possible, break these down into 2 or more smaller stories.<p>* When there is wider deviation, a "low ball" team member gives reasons for their low vote and the "high ball" member explains theirs too. Then discuss.<p>* Limit discussions (strictly 2 mins if you have a lot of stories to estimate; use a stopwatch) and invite a second vote to zoom in on a number. Two, maybe three, rounds of voting should be enough to get to the same number or a single size difference. If not, split the story or create a Chore to investigate the requirements further.<p>* Log blocking questions for the PO, update the AC/this is done when as you better understand the story, and add notes/reference links to the description while you're discussing the work.<p>* Log the agreed estimate/points number at the end and quickly move on to the next story.<p>Have a look at the following scenario and then after, I'll unpack some of the details.<p>How-to: Simple Story Estimation using your hands<p>Team estimation need not be a formal ritual. Two developers at a desk is enough.
On other occasions where you want to move quickly through many stories (you've just cut up a large block of work and the PO wants a date), booking a meeting room with a bigger selection of your team gives everyone focus, and increases the accuracy of estimation.<p>1. Gather team members together.<p>2. Open up Tracker and expand the story detail (so the title, AC and any
attachments/images are clearly visible) so everyone can see it.<p>3. Invite someone to read aloud the story and its AC. Show larger views of attachments. Someone could summarise or add a little more detail verbally if that helps.<p>4. Ask members whether the story is clear enough on its own for everyone to "pick a number" (silently). If not, ask and answer questions for a couple of mins and log questions down for the PO that the team can't resolve on their own.<p>5. When members "have a number", ask everyone to hold up closed fists and count backwards from 3 to reveal votes simultaneously. "3. 2. 1. go!"
Single finger for 1, 2 fingers for 2, etc., 8 fingers for 8, and all fingers for too big.<p>6. Jen votes 5, Alex 2, Kate 3 and James 2. Since she's high, the facilitator asks Jen "why the 5?"<p>7. Jen, who got caught up seeing the css and js note, explains a concern which the other team members quickly identify as confusion. Once the issue is clarified, Jen exclaims "Oh! Sorry, I'm a 2 then as well".<p>8. Another member asks "So go large then?"<p>9. And the rest of the team agree. The facilitator logs 3 as the estimate and opens up
the next story.<p>I know this example doesn't cover all the scenarios and variables, but I'll try to address many questions I hear frequently and some of the more common variations.<p>* Let's start with story points.
Developers are notorious for estimating very inaccurately when it comes to hours and days. Just ask any project manager. It's also a recipe for bias - I want to show my team and myself that I have a handle on something and I can do it (quickly), so I'm mentally incentivised to offer a smaller than reasonable number. Most developers' estimates come in at half the real time, or less, when estimating in linear time.
Points push all that to one side and free a technical team member to think more clearly. We usually say that points are in terms of "relative complexity or effort". Complexity works for some personality types. Effort for others. The other thing to note is that a 2 is not necessarily twice as much as a 1. Think of 1, 2, 3, 5 and 8 simply as containers. 5 is the "middle of the road" container and 1 is the smallest thing the team will ever do. In that "1 container" might go a one line change that takes 10 mins or a re-styling effort of 3 hours. But it's the smallest so the smallest items of relatively the same size get a 1.
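A quick aside before we get to the 8 container: if it helps to see the voting mechanics above as code, here's a rough sketch of one round (illustrative only; at the desk we just use our hands):<p><pre><code> SIZES = [1, 2, 3, 5, 8]  # "too big" is handled separately

 def resolve_votes(votes):
     # votes: sizes thrown by the team, e.g. [2, 2, 3, 5] or [3, "too big"]
     # Returns the agreed estimate, or None when the team should
     # discuss and re-vote (or split the story).
     if "too big" in votes:
         return None  # break it down into 2 or more smaller stories first
     low, high = min(votes), max(votes)
     if low == high:
         return low   # consensus
     if SIZES.index(high) - SIZES.index(low) == 1:
         return high  # single size difference: "go large"
     return None      # wider deviation: low/high explain, then re-vote
</code></pre>
So resolve_votes([3, 5, 5]) logs a 5 straight away, while resolve_votes([2, 2, 3, 5]) sends you back to a timed discussion and a second round.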
8 on the other hand is absolutely the biggest thing any member of the team wants to commit to in an iteration. If your iterations are 1 week and Bob thinks the story can be done in a week but Sarah thinks longer, then it's more than an 8, and should be broken down.<p>* Fewer tools is more.
Avoid the Planning Poker cards and apps and just stick to your hands. This also influenced the selection of Fibonacci numbers.
We don't use the size 0 because it's somewhat confusing and dangerous (like a Chore - more later).
We don't use more numbers because you get diminishing returns from increasing the resolution of estimates much further. Tracker also offers "t shirt sizes" S, M and L which I've found helpful for planning heaps of work up front very quickly, but not particularly practical for day to day work or for teams new to this process.<p>* At all times, remember: we're just estimating.
I'm preaching to myself here. I often have to remind myself that these are all just estimates. It helps you relax a little and not apply as much pressure to yourself (and your team mates) and sometimes get to a number quicker.<p>* Use the Done column to reset your base of reference.
I didn't include this in the narrative above because if you're estimating regularly (every 2-3 days), each team member subconsciously knows the relative scale of each point container. But more often than not, it's helpful to pull up the Done column in Tracker and refresh the team on what each size means, using recently completed work:
Scroll to the bottom of the Done column (most recent completed iteration) and work back until you find one or two stories of the sizes the team is debating, to compare against the current story being estimated. This is another mechanism to ensure consistency over time for the estimation inputs. You're essentially reminding each team member "what a 3 looks like" or "what a 5 looks like" prior to each silently forming their estimate. This historic reference is vital in ensuring that work of the same relative size receives the same estimate over and over again.<p>* PO does not vote.
After a story is defined, the developers usually estimate it. Sometimes other members of the build team can help but this activity must absolutely exclude the PO. The PO has a conflict of interest when it comes to estimating how big a story is, so they need to stay out. If your PO can sit in on the discussion to answer questions and further clarify, then that can be helpful but I've found they often influence the estimate, sometimes simply by being present. If you can find a way to meet with the PO and clarify the AC separately and estimate without the PO, I think you will have more success. Discuss in your Retro if you have issues.<p>This, like many practices, simply gets better over time. It's ok for it to be a little slow at first. Like commits, stories, iterations and release phases, the smaller you can batch things, the faster you'll cycle and the quicker you and your team will learn. |
What Really Happened with Vista: An Insider's Retrospective | It's fun reading these retrospectives, especially having lived somewhat close to them. I was as much a full time developer back during the NT to Vista days as I am today. Most of my development career centered around Windows technologies. I got my start, in the professional sense, writing apps in C++/(cough)VB and quickly moved on to C# -- which I still really love. It was both interesting and sad to watch Microsoft during these days. I'll never forget the SCO(/IBM/Microsoft) vs. Linux(/TheGPL/etc) days. Such a strange time to think back on Microsoft hating on Open Source ... or trying to create a competitor in the MSPL/CodePlex mess. It was almost comical to see a company that thrived in a world of "general purpose tools to solve many computing problems" (a.k.a. Office)[0] live in their own NIH world.<p>It was a hard time as a developer to feel proud of being part of that world. I saw a company that I admired behaving idiotically and under poor management[1]. I <i>loved</i> developing in C# (especially after .Net 2.0 with Generics), but I was generally unsatisfied with my operating system (Windows XP and later Windows 7 ... my company never went Vista). Because of the massive numbers of competing factions that this author points out, Windows tended to get everything but the kitchen sink and managed to do so in a sub-par manner. Oh, how I loved killing the file/URI associations to Internet Explorer and Windows Media Player on my own PC in favor of Firefox and VLC, only to have to make sure that all of the crap I wrote played well with both of those if they applied to the project[2]. The classic Bill Gates rant about Windows Movie Maker summed up that time pretty well[3].<p>The last few years under Nadella have been something special to me. Windows 10, despite its flaws/spyware bordering on malware, is enjoyable to use. I have bash (well, zsh, in my case) that functions far better than cygwin/msys ever did, PowerShell -- despite my general hate for its syntax and performance -- can basically handle any scripting task I need to throw at it and is <i>far</i> superior to <i>cmd</i>. Heck, in Windows 10, I can even have a True Color experience in conhost. Visual Studio Code, unlike Visual Studio, is an editor I actually go out of my <i>way</i> to use (as do all of my Linux-only coworkers). Microsoft is developing .NET out in the open and has embraced open-source in a way that a decade ago would have been met with suspicion related to "Embrace and Extend". And when I mention that I write code in C#, I don't have to listen to Java/other folks yelling at me about how <i>evil</i> Microsoft is.<p>And then there's my home-computing life. Despite my previous statements about Windows 10, some of my love for Ubuntu on Windows actually turned me away from Windows. I'd always run Linux at home. All of my servers are bare-metal Linux with one Windows host in a KVM virtual. I started <i>loathing</i> any time I had to do something that took me out of the tooling I use in the Linux world. Just try fighting with gpg-agent on Windows to get SSH key-auth working using the vanilla command-line SSH. It sucks and PuTTY/Pageant is far from a suitable alternative. Jumping and moving things among hosts via SSH, shell and just generally interacting with Linux is far less painful than Windows. So when I got my new laptop this Christmas, I had no intention of ever running Windows 10 on it. I actually booted it and it hung on "set up Windows Hello". 
I'm not even sure why I bothered to boot it to the hard drive in the first place. This is the first time in a decade of running Linux that I'm actually running it full-time on the machine I spend the (second) most time on. And I discovered, much to my surprise, Windows 10 <i>flies</i> in a KVM virtual with virtio drivers set up in the relevant places. Any time I'd tried to run any Linux variant in Hyper-V and actually <i>use</i> KDE, Gnome or any graphical interface, I gave up after swearing a bit. No clipboard support, and performance was unacceptably laggy. The other way around, I can barely tell I'm in a virtual (I toss the console off to Desktop 2 and I can flip between Windows and Linux), all without doing fancy things like GPU pass-through. And I really only have it there so I can compile a program with MSVC and test the Windows side of it. At this point I don't <i>ever</i> see going back (or switching to fruit). I love <i>endless</i> configurability (even if it comes at the cost of things not being close to "quite right" at the start) and having a mountain of good, free, choices at my disposal. And that work computer is getting reloaded next month when I'm done with the project I'm working on and have a free moment to do so.<p>[0] The bane of my existence being Excel and Access -- which employees at the company I worked at at the time used to solve incredibly complicated problems, incorrectly, and then panicked when their drive failed or the Access database became corrupted to the point of no return and we discovered how many hundreds of thousands of dollars were about to be lost due to worker creativity.<p>[1] Ballmer less so than Gates. That's my opinion, which I would normally be happy to back up, but I'm leaving it alone, today.<p>[2] Don't use that particular CSS, or worse, mangle it in this manner in case of IE 5/6. And why do we have this massive extra video file, oh yeah, that's the best format we can encode it in to guarantee playback on stock Windows without additional codecs installed -- because expecting a codec installation to work across all PCs always ends in tears.<p>[3] I'm not actually sure if that's urban legend or not and don't care to dig into finding out -- even if it is, and wasn't spoken by The Bill, every statement made described the mess that had become Windows at that time pretty well. |
Black Protest Has Lost Its Power | By Shelby Steele<p>The recent protests by black players in the National Football League were rather sad for their fruitlessness. They may point to the end of an era for black America, and for the country generally—an era in which protest has been the primary means of black advancement in American life.<p>There was a forced and unconvincing solemnity on the faces of these players as they refused to stand for the national anthem. They seemed more dutiful than passionate, as if they were mimicking the courage of earlier black athletes who had protested: Tommie Smith and John Carlos, fists in the air at the 1968 Olympics; Muhammad Ali, fearlessly raging against the Vietnam War; Jackie Robinson, defiantly running the bases in the face of racist taunts. The NFL protesters seemed to hope for a little ennoblement by association.<p>And protest has long been an ennobling tradition in black American life. From the Montgomery bus boycott to the march on Selma, from lunch-counter sit-ins and Freedom Rides to the 1963 March on Washington, only protest could open the way to freedom and the acknowledgment of full humanity. So it was a high calling in black life. It required great sacrifice and entailed great risk. Martin Luther King Jr., the archetypal black protester, made his sacrifices, ennobled all of America, and was then shot dead.<p>For the NFL players there was no real sacrifice, no risk and no achievement. Still, in black America there remains a great reverence for protest. Through protest—especially in the 1950s and ’60s—we, as a people, touched greatness. Protest, not immigration, was our way into the American Dream. Freedom in this country had always been relative to race, and it was black protest that made freedom an absolute.<p>It is not surprising, then, that these black football players would don the mantle of protest. The surprise was that it didn’t work. They had misread the historic moment. They were not speaking truth to power. Rather, they were figures of pathos, mindlessly loyal to a black identity that had run its course.<p>What they missed is a simple truth that is both obvious and unutterable: The oppression of black people is over with. This is politically incorrect news, but it is true nonetheless. We blacks are, today, a free people. It is as if freedom sneaked up and caught us by surprise.<p>Of course this does not mean there is no racism left in American life. Racism is endemic to the human condition, just as stupidity is. We will always have to be on guard against it. But now it is recognized as a scourge, as the crowning immorality of our age and our history.<p>Protest always tries to make a point. But what happens when that point already has been made—when, in this case, racism has become anathema and freedom has expanded?<p>What happened was that black America was confronted with a new problem: the shock of freedom. This is what replaced racism as our primary difficulty. Blacks had survived every form of human debasement with ingenuity, self-reliance, a deep and ironic humor, a capacity for self-reinvention and a heroic fortitude. But we had no experience of wide-open freedom.<p>Watch out that you get what you ask for, the saying goes. Freedom came to blacks with an overlay of cruelty because it meant we had to look at ourselves without the excuse of oppression. Four centuries of dehumanization had left us underdeveloped in many ways, and within the world’s most highly developed society. 
When freedom expanded, we became more accountable for that underdevelopment. So freedom put blacks at risk of being judged inferior, the very libel that had always been used against us.<p>To hear, for example, that more than 4,000 people were shot in Chicago in 2016 embarrasses us because this level of largely black-on-black crime cannot be blamed simply on white racism.<p>We can say that past oppression left us unprepared for freedom. This is certainly true. But it is no consolation. Freedom is just freedom. It is a condition, not an agent of change. It does not develop or uplift those who win it. Freedom holds us accountable no matter the disadvantages we inherit from the past. The tragedy in Chicago—rightly or wrongly—reflects on black America.<p>That’s why, in the face of freedom’s unsparing judgmentalism, we reflexively claim that freedom is a lie. We conjure elaborate narratives that give white racism new life in the present: “systemic” and “structural” racism, racist “microaggressions,” “white privilege,” and so on. All these narratives insist that blacks are still victims of racism, and that freedom’s accountability is an injustice.<p>We end up giving victimization the charisma of black authenticity. Suffering, poverty and underdevelopment are the things that make you “truly black.” Success and achievement throw your authenticity into question.<p>The NFL protests were not really about injustice. Instead such protests are usually genuflections to today’s victim-focused black identity. Protest is the action arm of this identity. It is not seeking a new and better world; it merely wants documentation that the old racist world still exists. It wants an excuse.<p>For any formerly oppressed group, there will be an expectation that the past will somehow be an excuse for difficulties in the present. This is the expectation behind the NFL protests and the many protests of groups like Black Lives Matter. The near-hysteria around the deaths of Trayvon Martin, Michael Brown, Freddie Gray and others is also a hunger for the excuse of racial victimization, a determination to keep it alive. To a degree, black America’s self-esteem is invested in the illusion that we live under a cloud of continuing injustice.<p>When you don’t know how to go forward, you never just sit there; you go backward into what you know, into what is familiar and comfortable and, most of all, exonerating. You rebuild in your own mind the oppression that is fading from the world. And you feel this abstract, fabricated oppression as if it were your personal truth, the truth around which your character is formed. Watching the antics of Black Lives Matter is like watching people literally aspiring to black victimization, longing for it as for a consummation.<p>But the NFL protests may be a harbinger of change. They elicited considerable resentment. There have been counterprotests. TV viewership has gone down. Ticket sales have dropped. What is remarkable about this response is that it may foretell a new fearlessness in white America—a new willingness in whites (and blacks outside the victim-focused identity) to say to blacks what they really think and feel, to judge blacks fairly by standards that are universal.<p>We blacks have lived in a bubble since the 1960s because whites have been deferential for fear of being seen as racist. The NFL protests reveal the fundamental obsolescence—for both blacks and whites—of a victim-focused approach to racial inequality. 
It causes whites to retreat into deference and blacks to become nothing more than victims. It makes engaging as human beings and as citizens impermissible, a betrayal of the sacred group identity. Black victimization is not much with us any more as a reality, but it remains all too powerful as a hegemony.<p>Mr. Steele, a senior fellow at Stanford University’s Hoover Institution, is author of “Shame: How America’s Past Sins Have Polarized Our Country” (Basic Books, 2015).<p>Appeared in the January 13, 2018, print edition. |
How America Fractured in 1968 | I must be alone in thinking that times are pretty good and the US is not fracturing, and today is nothing like the late 1960s or 70s.<p>Crime/violence:<p>Both violent and property crime rates are way down [1].<p>My home region of the Ozarks (SW Missouri, NW Arkansas, NE Oklahoma) is much safer than when I was in jr. high and high school (1995-2001) when there was a huge amount of rural violence associated mostly with the meth trade. I had friends and neighbors killed both purposefully and incidentally with this, none of whom were users or involved in the trade in any way--just children or landlords or neighbors or whatever. We used to get out of school because the cops would be raiding meth labs within shooting distance of the school. This has largely subsided. There is definitely violence associated with the opioid crisis, but the deaths have been overdoses and suicides at least among the people I know.<p>War:<p>There are definitely some nasty and protracted conflicts abroad, but, unlike in 1968, the US is not mired in them and losing thousands of soldiers a year--Wikipedia gives 4,491 deaths for US soldiers for the entire Iraq war, for example. Not that it wasn't a horrible and unnecessary loss, and several orders of magnitude worse for Iraqis than the US, but this was also the case for Vietnam (and Cambodia, Laos, etc). While there continues to be strife in the Middle East and a few other places, and I worry about Africa being a major site of east vs. west proxy war in the coming decades, globally the situation seems more peaceful and democratic than in the Cold War or the colonial era.<p>Societal issues in the US:<p>There is a lot of sound and fury, but protests are largely peaceful. The protests involved with Ferguson, Charlottesville, etc. were much milder than in the 1960s, or the race riots of Tulsa (1921, 39-300 deaths, 10,000+ people left homeless [2]), LA (1992, 63 deaths [3]), and so forth.<p>Gay rights were granted largely peacefully as well--lots of progress here, though it wasn't frictionless.<p>Similarly, the recent wave of defenestrations of powerful men due to sexual abuse is to me a net positive even if there are inevitable witch hunts associated.<p>Domestic terrorism seems to be less of a concern. I don't know about rates but from what I understand there were a lot of bombings from both far left and far right sources in the 1960s and 1970s. We haven't had any high-profile political assassinations in a while.<p>The rise in single-parent households is a concern. I have no idea how much this reduces the incidence of domestic violence as moms don't live with abusive dads. Maybe a little?<p>Homelessness is also a major concern but has been slightly decreasing over the past decade, not even accounting for population growth [4]. I have no idea how it compares to the 1960s or so. I think a lot of mentally ill people were institutionalized then, which isn't the case now AFAIK.<p>The economy:<p>Parts of the economy are good (stocks, general employment levels), though there are still problems with stagnant wages for many and with debt levels (particularly student loans), and housing prices in many places are terrifying (I'm living in SV on a pretty meager nonprofit salary, because my wife has a great job that pays OK). But it's a lot better than it was a decade ago, or in the 70s. There are still huge issues with medical care that need to be resolved, but at least lots of people are employed as paper pushers...<p>Inequality is a major concern, I concede. 
And I worry about an upcoming stock market crash.<p>I think it's possible that trade work will pay better and better, and as more tradespeople rejoin middle-class communities (neighborhoods, school districts etc.) some of the inequalities of social status between higher-income blue-collar work and lower-income white-collar work will dissipate.<p>The environment:<p>Way better than in the mid-20th century. We have a lot more protected lands than in the 1960s, and air and water quality are much improved (thanks, Nixon!). From Wikipedia:<p>"In the United States between 1970 and 2006, citizens enjoyed the following reductions in annual pollution emissions:<p>- carbon monoxide emissions fell from 197 million tons to 89 million tons<p>- nitrogen oxide emissions fell from 27 million tons to 19 million tons<p>- sulfur dioxide emissions fell from 31 million tons to 15 million tons<p>- particulate emissions fell by 80%<p>- lead emissions fell by more than 98%"<p>Without looking stuff up, I think that the broad population shifts from rural and semi-rural to urban and suburban since the 1950s are decreasing the pressure on the environment; national forests in most places are recovering from the ~1850s-1950s logging period.<p>Climate change is bad, forest fires are bad, and the decrease in insect populations is probably quite bad and may indicate a very rotten ecological foundation. However, the <i>effects</i> of climate change aren't directly tearing our country apart at present; the <i>debate</i> over it (and the underlying struggle for power and authority between science and government vs. church and business) is or did contribute. Once agriculture begins to fail in the Great Plains and CA central valley, it'll get real.<p>---<p>Honestly, I don't get it. Sometimes I think things have been decent for long enough that we've forgotten what it was like when things were really bad--the Civil War, WWI, the Great Depression, WWII, Vietnam, etc.<p>I also think that a lot more of the seeming chaos is that more voices are included than previously, particularly from marginalized communities.<p>Things aren't perfect, but are they awful? Or are we just bored and riled up about small matters such as the rhetoric of our politicians, or their seeming inability to get stuff done?<p>[1]: <a href="https://en.wikipedia.org/wiki/Crime_in_the_United_States" rel="nofollow">https://en.wikipedia.org/wiki/Crime_in_the_United_States</a><p>[2]: <a href="https://en.wikipedia.org/wiki/Tulsa_race_riot" rel="nofollow">https://en.wikipedia.org/wiki/Tulsa_race_riot</a><p>[3]: <a href="https://en.wikipedia.org/wiki/1992_Los_Angeles_riots" rel="nofollow">https://en.wikipedia.org/wiki/1992_Los_Angeles_riots</a><p>[4]: <a href="https://journalistsresource.org/studies/government/health-care/homelessness-u-s-trends-demographics" rel="nofollow">https://journalistsresource.org/studies/government/health-ca...</a> |
Reasons Bitcoin Could Fall | It seems to me that it is a product that derives its value purely from speculation. Luckily for them, all those who hold bitcoin are incentivized to be proponents of it, thereby creating a rise similar in scope and manner to virality on the Internet. Even if people were to recognize that the value of Bitcoin derives purely from holders having an incentive to promote it, it's not clear that this truth would have any bearing or inspire any instinct to get rid of it, despite it not being tangibly valuable. Obviously it relies on electricity being available and the internet being widespread, and both of those things are set to continue. So what could cause a mass selling off of Bitcoin? It seems to have internal mechanisms that stop the downward-spiral virality you see in things with similarly viral upward ascents, because there will always be people with a monetary incentive to convince those who are starting to sell off bitcoin that they should not, plus advocates and de facto employees out there spreading the gospel, since getting more people to use it works like a sales commission for them. While it is a great benefit for those who were left, or are provided with, a sturdy keep by their parents, it doesn't help those who don't have the money to invest. But as a tool for upper-middle-class kids to do absolutely nothing and live in a living situation as good as or better than their parents', it does slow down some of the wage disparity between the very, very top and those who are merely comfortable. I wish I weren't a broke tech entrepreneur, because it's a great thing to have money in, and it's like a part-time job: getting other people into it raises the value of your money. I think that eventually, once more competitors enter the market, the margins on that upward curve will shift and create competition that guarantees a paradigm of winners and losers. If one type loses, and the value becomes more aligned with the ability to pick the winner in the marketplace, then I think you may see it start to become more of a gambler's fix, and the steady upward gains enjoyed by current holders will be redistributed to those who correctly forecast which specific coin to back. Since that volatility doesn't necessarily cause a net change to Bitcoin as a product, winners and losers will eventually make it seesaw more than it currently does. 
It mimics what one would think would be the goals of a covert upper-middle-class organization that seeks to ensure the continuation of capitalism while realizing that capitalism inherently creates winners and losers. So I would not be surprised if it were heavily backed by foreign governments looking to gain some supporters and appease the part of the population that is, for the most part, getting less poor relative to its share of the country's income. This could make people loyal to their governments while (assuming the government were an early investor in this device that incentivizes loyalty and advocacy) allowing those governments a fairly simple mechanism to collect public funds, appeasing the upper-middle-class part of the population that would otherwise be restless, and using them to quell some of the more radical elements of the populist left who want greater wealth redistribution. Bitcoin synthetically enables those who would otherwise be losers in the capitalist system to be winners, and as it happens, those same winners are likely the same winners whose parents were also winners relative to their age group (how else did they get the money to invest?). So in that regard it can be seen, or is seen by some governments, as a tool for maintaining order and meeting people's expectations, while kind of kicking in the gut the poor, who are left to assume that they weren't as good as their synthetically inflated peers who (like their parents) found a way to become richer than they did, thus confirming an intuition that is not illogical and providing the perfect outlet for capitalism to continue to thrive. It's definitely not going anywhere for a while, unless the creators have a mechanism for cashing out on everyone's money, in which case I would have to lightheartedly chuckle as my now-wealthier friends rejoin me on the startup train. Whoever wins the war on AI may be able to synthetically mimic volatility and cause it to crash if they commit themselves to that endeavor. That would be very risky, and unnecessary, but if executed properly, with the right amount of capital and with a propaganda machine that acts as the first general AI that can advocate as a human would, rephrasing that advocacy in many different ways, they could find a way to create the appearance of a mass fall, only to buy at a lower point, and then let the AI be the advocates that the humans currently are. I'm not planning on doing that, though. But if someone else wins the AI war, who knows what they could do, and if they are real assholes, that could be a way for someone to monopolize or take a great amount of control of the product with |
Indefinite solitary confinement in Canadian prisons ruled unconstitutional | I had a sentence that had me in for many weekends in a row (in Canada). I had some weekends where I was in a room with others, weekends where I was on my own in a very small cell, and other weekends where I was in a cell with one other person.<p>1. In a "day room" with 5-8 others. It also had a TV. The television was a blessing and a curse. Since I wasn't the only one in the cell, I didn't have control over it, so it just blared for 12 hours straight. You couldn't leave the room and take a break from it, so it became hard to deal with at times. That room had a bathroom with a door on it, which was nice. With five people in the room it was tolerable. We all had mats on the floor. Once we were up past 5 or 6, it was mat to mat with hardly any room to maneuver. The worst part of this room was that the lights never turned off completely. It was agonizing trying to fall asleep with fluorescents on all night. One nice thing: there were windows up near the ceiling that reflected light onto the wall throughout the day. It's amazing how meaningful a bit of light becomes when you're stuck in a room for 52 hours straight.<p>2. I only spent one weekend by myself in solitary. It was a tiny, filthy cell. There was only enough space for my mat with just enough room left to step around it. The edges of my sheet were black by the end of the weekend. This was a rough weekend. They turned the lights off around 9pm, which was an improvement over the day room. I meditated; that helped. I could feel my mind start to warp and unravel during the weekend, so I can't imagine how others do it for weeks on end. Mid-weekend a guard took pity on me and handed me a book. I had to pace myself reading it, trying to make it last.<p>3. Other weekends were spent in cells that were a bit wider. Two inmates to a cell, bunk beds so we weren't on the floor. This wasn't too bad, though it had a real dungeon quality to it with beige metal walls and giant rivets. No privacy when using the washroom - it was right next to the beds so your face is a couple of feet away from your roommate's bed when you're on the toilet. The ceilings were fairly high, with a tiny window letting in a bit of light. You couldn't see out of it. It was frustrating, because you could see it used to be a full window, but they'd covered up most of it. When it got dark, the bottom dropped out of me and I was overcome with a really profound sadness.<p>4. The dorm was the best experience by far. I only got one weekend in the dorm, which sucked. It had around 16 bunk beds and was quite spacious. Usually they cram it full, which would have been hellish, but there were only 6 or 7 of us, so it felt luxurious. There were large windows, frosted so you couldn't see out, but it was the best thing ever. We don't realize how much light affects our mood. Having that natural light stream in all day was like manna from heaven. The dorm had two tables with benches which meant I didn't have to eat balancing my food tray on my lap, sitting on my mat. It had a TV, but the room was large enough you could go to the other end and sort of take a break from it. I got a deck of cards that weekend, which was a welcome distraction.<p>5. Pods: these were similar to #2, but smaller. They had a small bench to eat at, but other than that, just enough room to pace back and forth. They did have windows that you could actually see out of. That was a treat. 
I would just sit and stare at the trees and the sky for hours. I would feel quite melancholy and sad, but seeing the world outside made me feel somewhat connected. The cell was small enough that it felt quite psychologically oppressive. I know how a lion feels, pacing back and forth at the zoo. It's a sign of mental distress, and I couldn't stop myself from doing it. One thing I noticed is that you have to find a way to break up the monotony of the day. Pace for a bit, lie on the bed, pace, sit on the bed, stare out the window, lie on the bed but look at a different section of the wall. Sunday nights were the worst. It was difficult to fall asleep, and I always ended up waking hours before we were released, not knowing what time it was. Most weekends I could get a sense of time, but in #2 and #3, there was no way to tell the time. It really messes with your mind, not knowing if an hour has passed, or 4 hours, or 30 minutes. I was fortunate in that it only ever lasted 52 hours, but even that was difficult to endure some weekends. |
Google, You Creepy Sonofabitch | I owned a Google Pixel last year. I found the constant pestering from Google Maps to upload more photos very annoying. I returned it after a while.<p>I sent the following issues to Google customer service and returned the device. I am putting them here with a sense of respect for both Google and the hn community. Some of it is dated.<p>I am very unhappy with the Pixel purchase. It is not worth the price. I am an iPhone user who switched to the Pixel after seeing the Pixel event. I am also a Google fan but I found the experience with the Pixel jarring. Below is my experience so far. I am going to return the Pixel and go back to my old iPhone 5s.<p>1. My experience with music on Android
I did not like the Google Play Music app because the free version meant having to put up with ads. Also, I did not like using data to play the songs I already owned. So I downloaded the songs in Google Play Music. But I did not like the Google Play Music app interface. So I read reviews of the best music players and bought Neutron and Poweramp. But neither music app was able to use the music which Google Play Music downloaded. So I spent an hour finding ways to move music from my PC to the Pixel and did it with Windows Media Player. Then I started using Neutron, chose the automatic option to find music, and it started playing podcast files, which is not what I expected (I had bought Pocket Cast and downloaded some podcasts earlier). Then I used Poweramp. But what I noticed was a bug in the Android OS. When I connected the 3.5 mm audio jack and disconnected a call I was on, both Neutron and Poweramp started playing! Shouldn't the Android OS allow only one player to play music? Isn't that the most basic test case? So playing music on Android was a bad experience. When I closed the app the music was still playing. The app was sitting in the notification area, still running. I wanted the music to stop when I closed the app. Why doesn't Google make a decent default music player for Android? And why does music start playing when I connect the 3.5 mm cable? I may not want to hear music; I may want to listen to a podcast. The music experience has been horrible on the Pixel.<p>2. Using Google Assistant to make a conference call
I pressed and held the middle button and said "call Jack". It brought up a list of numbers, chose one, and the call failed. This is a conference call number. The contact has a comma in between two numbers. A comma is used for a pause. Google Assistant dropped the comma and dialed the number, and the call failed. Because of this bug in Google Assistant I can't use it to dial into a conference call.<p>3. Podcasts
I bought Pocket Cast but on the lock screen I don't see the 10 second rewind and forward buttons. I only see the back and next buttons. Why can't I see the 10 second rewind and forward buttons? Why do I have to unlock the phone to access those two buttons? It is irritating to hear 30 minutes of a podcast only to press the back button and be put back at the beginning of the podcast.<p>4. iTablaPro
Android does not have a decent alternative to iOS's iTablaPro app. The Swar Systems app is not that good. For certain user complaints on the Google Play store, the developer asks the customer to go to a certain folder and delete files. Why expose a customer to such intricate technical issues? Also there are certain issues that happen on other models that are related to timing. Generally I notice that the quality of apps is not as good as on iOS.<p>5. The back button
Using the back button is confusing. Sometimes instead of taking me to the previous screen, the app takes me to the home screen. Why?<p>6. No headphones
I paid $750. So why doesn't the Pixel come with a headset?<p>7. Video editing
I found it hard to trim and cut videos. I did not find a default Android app to edit and trim videos.<p>8. Music syncing
If I delete a song, how do I sync it so that it is deleted on my PC as well? Why isn't Google making a good music sync program?<p>9. Local search results on the phone do not include the music files on the phone. Why?<p>10. When I turn on my Bluetooth headset the music plays on it instead of the 3.5 mm jack. I don't see any easy way to choose the 3.5 mm jack for songs and take calls using my Bluetooth headset.<p>In all, I am not happy with the price I paid for the Pixel; the Android OS and apps seem to work poorly. Android is poorly designed, without care paid to the needs of the customer. I am deeply dissatisfied. It hurts me to say it because I am a big fan of Google as a company and like many of its services. |
Ask HN: Has Amazon.com gone downhill? | As a consumer, Amazon is amazing in terms of breadth, price and ease of use. I'm not claiming that they have every product, nor are their prices always the best. <i>But</i> their selection is very good and their prices are typically very competitive. And if I'm just buying run-of-the-mill stuff that doesn't require too much research (e.g. a USB memory stick), it takes me about 60-120 seconds to search the site and check out and I'll almost always receive the product on time, 2 days later. Very few other retailers can match the shipping time. Plus, when I have had issues with products, I've generally been very happy with their customer service.<p>However, from a consumer's perspective, I have two major issues with them:<p>1) Knock off and counterfeit products - there seems to be more and more of them these days, especially with third party sellers or even fulfilled-by-Amazon items, since it seems like they might co-mingle inventory. If I'm making a large purchase where there is risk of buying a counterfeit item (e.g. Bose headphones), I tend to go elsewhere these days, unless Amazon has a very competitive price. In particular, for electronics, Best Buy will match Amazon prices and I trust their supply chain more. I also find that B&H has excellent customer service, a great selection, great interface and are reliable to work with.<p>2) Suspicious Reviews - No doubt there are a lot of fake reviews on Amazon. When you're talking about the largest online retailer where reviews make or break a product, obviously there are going to be fake reviews. Then there are products with what I'd call "hired reviews". I believe that this is no longer permitted, but this is where a company gives free (or very cheap) samples in exchange for an "honest" review of the product. Naturally, people who are given free (or cheap) samples in exchange for writing a review are going to give biased reviews. Finally, for many items, it could be as simple as friends and family writing reviews of your product, which are also going to be biased. Because there's so much incentive to game reviews, it's gotten to a point where I no longer trust them anymore [1].<p>As a consumer, what I'd really like to see is a competitor to Amazon with a more curated selection. For example, I don't need 250 different options for blenders. What I would prefer is a site that offers 5 options at various price points which have all been vetted by the store owner. User reviews are helpful, but most users aren't really familiar with the alternatives. I can say that I like my blender, but at most, I have experience with one or two other blenders. However, when a store-owner says they prefer a blender, I have more trust that they have experience with the other models (even if they haven't used them, they've seen the returns and/or customer complaints) [2].<p>From third party sellers' points of view, I've heard many horror stories of issues they have had selling with Amazon, but I won't get into that right now.<p>[1] There are two other big issues with user reviews in general. The first is that when a product (or establishment) gets enough good reviews, people buy the product and are inevitably disappointed. This is an even bigger issue with reviews for things like restaurants. For example, say a local diner gets 100 five-star reviews on Yelp. Now, people come across the place on Yelp and they see it has great reviews and so they decide to check it out. 
But they came in with very high expectations so they are inevitably disappointed and so when they write their own review, it will be lower than it might have been if they had gone in with average expectations and were pleasantly surprised. The other issue with user reviews is that every user has different needs and expectations and many of them won't match mine. For example, say I'm a professional photographer reviewing the Sony RX100 V camera, which retails for about $950 and is popular with amateurs as well as professionals looking for a small sized travel camera. If I'm a professional in need of a very small camera for travel, I've got completely different expectations than an amateur who is using this as their primary camera. For example, $950 might be a lot to spend on a camera and an amateur might justifiably feel that the $500 alternative does the job 90% as well. But for a professional, $950 is almost a budget camera and if it performs even a little bit better, the difference in price might be negligible. Needless to say, the reviews from both professional and amateur users of this camera get grouped together on Amazon, which ends up meaning that the reviews are less helpful for both groups.<p>[2] This is why I love sites like The Wirecutter, where they examine five to ten options and pick the one they like best. Consumers generally don't do this and so I find a single review from the Wirecutter to be more useful than 1000 reviews on Amazon. |
Show HN: Convert image to a prime whose binary representation matches the image | Here is a T-Rex<p><pre><code> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 1 1 1 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 1 1 1 0 0 0 0 0 0 0 0
0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
0 0 0 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0 0 0 1 0 0 0 0 0 0 0
0 0 0 1 1 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 1 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0
0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0
0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 1 1 1 0 0 1 0 1 0 0
0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 0
0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0
0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 1 1 0 1 0 1 1
</code></pre>
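In case anyone wonders how such a prime is found, here is a rough sketch of the usual trick (my own illustration; the actual tool may differ, and I'm assuming sympy for the probable-prime test). Read the bitmap as one long binary number and walk upward through odd numbers until a probable prime turns up; the gap you cross is tiny relative to a ~1000-bit number, so only the last few pixels of the bottom row can change:<p><pre><code> from sympy import isprime

 def image_to_prime(rows):
     # rows: strings like "0 0 1 ...", top row first
     bits = "".join(rows).replace(" ", "")
     candidate = int(bits, 2) | 1     # primes > 2 are odd
     while not isprime(candidate):
         candidate += 2               # step through odd numbers
     return candidate

 def render(p, width, height):
     # re-pad with leading zeros so blank top rows survive the round trip
     bits = bin(p)[2:].zfill(width * height)
     return "\n".join(" ".join(bits[r * width:(r + 1) * width])
                      for r in range(height))
</code></pre>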
And here is an Argentinosaurus<p><pre><code> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 1 1 1 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 1 1 1 1 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 1 1 1 0 0 1 1 1 1 1 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 1 1 1 0 0 1 1 1 1 1 1 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 1 1 0 0 0 1 1 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 1 1 1 1 0 0 0 0 0 1 1 1 1 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0 1 1 1 1 1</code></pre> |
Linus Torvalds: “Somebody is pushing complete garbage for unclear reasons.” | Haha, two mails down, David Woodhouse doesn't disappoint and sends a message that I'm copying here:<p><a href="http://lkml.iu.edu/hypermail/linux/kernel/1801.2/05282.html" rel="nofollow">http://lkml.iu.edu/hypermail/linux/kernel/1801.2/05282.html</a><p><pre><code> I think we've covered the technical part of this now, not that you like
it - not that any of us *like* it. But since the peanut gallery is
paying lots of attention it's probably worth explaining it a little
more for their benefit.
This is all about Spectre variant 2, where the CPU can be tricked into
mispredicting the target of an indirect branch. And I'm specifically
looking at what we can do on *current* hardware, where we're limited to
the hacks they can manage to add in the microcode.
The new microcode from Intel and AMD adds three new features.
One new feature (IBPB) is a complete barrier for branch prediction.
After frobbing this, no branch targets learned earlier are going to be
used. It's kind of expensive (order of magnitude ~4000 cycles).
The second (STIBP) protects a hyperthread sibling from following branch
predictions which were learned on another sibling. You *might* want
this when running unrelated processes in userspace, for example. Or
different VM guests running on HT siblings.
The third feature (IBRS) is more complicated. It's designed to be
set when you enter a more privileged execution mode (i.e. the kernel).
It prevents branch targets learned in a less-privileged execution mode,
BEFORE IT WAS MOST RECENTLY SET, from taking effect. But it's not just
a 'set-and-forget' feature, it also has barrier-like semantics and
needs to be set on *each* entry into the kernel (from userspace or a VM
guest). It's *also* expensive. And a vile hack, but for a while it was
the only option we had.
Even with IBRS, the CPU cannot tell the difference between different
userspace processes, and between different VM guests. So in addition to
IBRS to protect the kernel, we need the full IBPB barrier on context
switch and vmexit. And maybe STIBP while they're running.
Then along came Paul with the cunning plan of "oh, indirect branches
can be exploited? Screw it, let's not have any of *those* then", which
is retpoline. And it's a *lot* faster than frobbing IBRS on every entry
into the kernel. It's a massive performance win.
So now we *mostly* don't need IBRS. We build with retpoline, use IBPB
on context switches/vmexit (which is in the first part of this patch
series before IBRS is added), and we're safe. We even refactored the
patch series to put retpoline first.
But wait, why did I say "mostly"? Well, not everyone has a retpoline
compiler yet... but OK, screw them; they need to update.
Then there's Skylake, and that generation of CPU cores. For complicated
reasons they actually end up being vulnerable not just on indirect
branches, but also on a 'ret' in some circumstances (such as 16+ CALLs
in a deep chain).
The IBRS solution, ugly though it is, did address that. Retpoline
doesn't. There are patches being floated to detect and prevent deep
stacks, and deal with some of the other special cases that bite on SKL,
but those are icky too. And in fact IBRS performance isn't anywhere
near as bad on this generation of CPUs as it is on earlier CPUs
*anyway*, which makes it not quite so insane to *contemplate* using it
as Intel proposed.
That's why my initial idea, as implemented in this RFC patchset, was to
stick with IBRS on Skylake, and use retpoline everywhere else. I'll
give you "garbage patches", but they weren't being "just mindlessly
sent around". If we're going to drop IBRS support and accept the
caveats, then let's do it as a conscious decision having seen what it
would look like, not just drop it quietly because poor Davey is too
scared that Linus might shout at him again. :)
I have seen *hand-wavy* analyses of the Skylake thing that mean I'm not
actually lying awake at night fretting about it, but nothing concrete
that really says it's OK.
If you view retpoline as a performance optimisation, which is how it
first arrived, then it's rather unconventional to say "well, it only
opens a *little* bit of a security hole but it does go nice and fast so
let's do it".
But fine, I'm content with ditching the use of IBRS to protect the
kernel, and I'm not even surprised. There's a *reason* we put it last
in the series, as both the most contentious and most dispensable part.
I'd be *happier* with a coherent analysis showing Skylake is still OK,
but hey-ho, screw Skylake.
The early part of the series adds the new feature bits and detects when
it can turn KPTI off on non-Meltdown-vulnerable Intel CPUs, and also
supports the IBPB barrier that we need to make retpoline complete. That
much I think we definitely *do* want. There have been a bunch of us
working on this behind the scenes; one of us will probably post that
bit in the next day or so.
I think we also want to expose IBRS to VM guests, even if we don't use
it ourselves. Because Windows guests (and RHEL guests; yay!) do use it.
If we can be done with the shouty part, I'd actually quite like to have
a sensible discussion about when, if ever, we do IBPB on context switch
(ptraceability and dumpable have both been suggested) and when, if
ever, we set STIBP in userspace.</code></pre>
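If it helps, here is the policy the RFC patchset proposed, boiled down to a rough sketch (my paraphrase of the mail above, not actual kernel code):<p><pre><code> from dataclasses import dataclass

 @dataclass
 class Cpu:
     is_skylake_generation: bool

 def spectre_v2_plan(cpu, compiler_has_retpoline):
     # Paraphrase of the trade-off described above, nothing more.
     plan = ["IBPB on context switch and vmexit"]  # needed in every variant
     if cpu.is_skylake_generation:
         # Retpoline alone doesn't cover Skylake-era 'ret' mispredicts
         # (e.g. 16+ deep call chains), and IBRS costs less there anyway.
         plan.append("IBRS on kernel entry")
     elif compiler_has_retpoline:
         plan.append("retpoline thunks for indirect branches")
     else:
         plan.append("update the compiler; retpoline needs its support")
     return plan
</code></pre>
Whether to also set STIBP for userspace, and exactly when to do the IBPB, is the open question the mail ends on. |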
Thousands of Turks accused of using Bylock app despite never having used it | Hello everyone. I am one of the victims of these Bylock cases and of the tracking code / advertising links etc.<p>I was on trial facing 15 years of heavy jail time, and it was certain that I would be sentenced to at least 6 years and 3 months if this particular iframe code had not been found in an old phone’s temporary files.<p>Actually, after the iframe code was found by one of the official computer forensics experts employed by one of the official Bylock case public prosecutors, he and the public prosecutor kept it secret from the public and tried to bury it. Until some hero got a screenshot of this official finding and leaked it to one of the bravest lawyers in Turkey, Ali Aktaş, we had no concrete proof of tracking codes or advertising links.<p>I know you cannot believe that 100,000 people are getting sentenced solely based on reverse tracking of CGNAT logs (IP access request logs kept by ISPs). Yes, you are right. I myself also couldn’t imagine such a thing would happen in my beautiful country; hell, I couldn’t imagine it even in banana republics. But one morning I woke up and anti-terror forces broke into my house to arrest me and seize all of my digital devices until an unknown time. It was a truly shocking experience and I had no idea how this could happen to me when I had never used that stupid, amateur and lame communication Android app Bylock. Yes, Bylock was a very insecure app which allowed passwords like "0", did not hash user passwords (not even MD5, though there are some claims that very late versions of the app hashed user passwords and added some password restrictions such as a minimum of 8 characters), kept all of the private keys in the database so the administrator could read all of the messages, did not use any cloud server or cloud service or implement a proxy system to prevent IP tracking, and so on. All of this is written in the Turkish Intelligence (MIT) official technical Bylock report.<p>I was lucky that I was not at home at the time, so I had the opportunity to research and prepare a defence. After a few days of research, I found out that our brightest brains at the Information and Communication Technologies Authority (BTK) decided to scan back through all of the CGNAT logs, determine whoever had made any access request to one of the IP addresses determined to belong to the Bylock server, and put all of those citizens on the Bylock users list. That is how ridiculously I was put on that list.<p>But even the preparation of the list cries out loud about the erroneous procedure being followed. First of all, the Information and Communication Technologies Authority (BTK) set the CGNAT scan start date 5 months after the Bylock app release (the release date is April 2014 but the CGNAT scan start date is August 2014). So whoever used the Bylock app or accessed the Bylock server IPs during this period was not put on the list. Moreover, they found that a total of 250,000 individuals had made at least 1 IP access request to the Bylock server IPs. Since 250,000 is too many to put in jail, they came up with this "fantastic" rule: they filtered the list and removed whoever made access requests on only 1 or 2 days. Then they claimed to the media that they had removed the people who did not actually use the software. With this "so scientific" approach, the list of Bylock users shrank from 250,000 individuals to 110,000. 
So for whoever had access to the IPs on 1 or 2 days, life continued; and for whoever had access on 3 days, like me, life turned into hell.
Also, according to Europol, CGNAT systems have up to an 80% error rate when tracking individuals. It wouldn't be a stretch to assume that Turkish ISPs also have very crappy CGNAT logging systems: a proper CGNAT logging system, with proper equipment and safeguards, brings a lot of cost to the ISPs with 0 economic income in return.<p>So how are prosecutions made in the Bylock cases? It is very simple. If you are on the list, you are 100% guilty, and no matter what, you get sentenced to jail time for being a member of the Gulenist armed terrorist organization. No one whose name is on the list has been granted an acquittal so far. Yes, solely based on 3 lines of CGNAT logs, you get sentenced. Only those whose names get erased from the list, or for whom it is determined that someone else actually used that GSM phone number or that wifi, are granted acquittal. There is no way to prove that you didn't use Bylock at all, or that you didn't use it for criminal purposes.<p>The list even contains many citizens who could not technically have registered with Bylock. Bylock was an application similar to WhatsApp. It did periodic polling of the server to get new messages or voice calls; Tuncay Beşikçi determined it to be approximately every 15 seconds, from some of the real users' CGNAT logs. It was also determined that at least 6 IP requests are necessary to complete registration with the application. While these are technical facts, there are about 20,000 people on the list whose total CGNAT logs number fewer than 20. Can you imagine the bullshit that is happening? There is no way to use such software with a total of 20 IP access logs, but these people, still left on the list, are getting at least 6 years 3 months of jail time. My CGNAT log count was lower than 10, and there were at most 2 IP access requests in any 5-minute period (and we can assume someone would complete registration within 5 minutes). So the only evidence used against me was actually, technically, 100% proof that I never registered with the Bylock app! But what happened in court? The judge didn't acquit me until my name was removed from the list. There are also many other victims already condemned to 6 years of jail time with fewer than 10 lines of CGNAT logs, and with CGNAT logs as the only evidence.<p>So how did Bylock become this kind of tool for mass arrests? Through the total and constant poisoning of the media. According to the media, and therefore the Court of Cassation in Turkey, Bylock was the following kind of software:
1: It had a reference and activation feature run from the center of the terrorist organization in Pennsylvania.
2: No one could make IP access requests to the Bylock server except those who had successfully registered with the application and whose registration was approved by the terrorist organization's central command center.
It would make sense to arrest people based on CGNAT logs if the software were like this, right? But both presumptions are 100% wrong. Bylock was freely available software on Google Play, and anyone who wanted it could download it from there until 3 April 2016; check out the AppBrain Bylock page for more information. Also, it didn't have any kind of reference or activation system for registration or usage. Its usage model was very similar to Skype or Discord. What kind of system it was is properly documented in the technical report of Turkish Intelligence (MIT). Moreover, no one can prevent anyone from making an IP access request to any IP. Even if your access to a particular IP is blocked by that IP's owner, your request would fail but would still be logged in CGNAT; and CGNAT doesn't log whether you successfully connected to the IP, the duration of the connection, or how many bytes you transferred. At least, none of this is stored in Turkish ISPs' CGNAT logs. A Turkish ISP's CGNAT log holds only these fields: access request start date, access request destination IP, access request destination port, access request private port, and private IP.<p>Also, since you do not know the atmosphere in Turkey, you cannot correctly evaluate this article. I am sure that Tuncay Beşikçi also knows this was simply tracking code or advertising links etc. But if you said that Bylock carried advertising or used tracking codes like other apps, no one would believe you; hell, they would blame you and perhaps arrest you. So announcing that all these tracking codes and advertising links were a clever plot of FETÖ (the Gulenist terrorist organization) was the only way to rescue some of the people who had never used the Bylock app.<p>This is the summary of the Bylock cases, which still go on. I am guessing that at least 60,000 people have been put in jail for at least 1 day over Bylock since the beginning. There are still about 40,000 on the list, waiting to be ambushed by terror forces whenever they learn they are on the Bylock list. I believe many of them don't even know they are on the list, since they never used Bylock. Moreover, even though I received a full acquittal, all of my digital devices are still held seized.<p>Finally, I am very shocked that the Bylock cases have almost zero coverage in developed countries. They are totally blind to these cases.
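To make the arithmetic above concrete, here is a minimal sketch (Python, with invented subscriber IDs, dates, and a documentation-range IP) of the two checks the comment describes: BTK's "3 or more distinct access days" listing rule, and the "at least 6 requests to complete registration" fact that contradicts it. Only the five logged fields named in the comment are assumed.<p><pre><code>
from collections import defaultdict

# One simplified CGNAT record per access request: the five fields the
# comment says Turkish ISPs log, plus an invented subscriber id so the
# records can be grouped.
LOGS = [
    # (subscriber, start_date, dest_ip, dest_port, private_port, private_ip)
    ("sub-1", "2015-03-01", "203.0.113.5", 443, 40001, "10.0.0.7"),
    ("sub-1", "2015-03-01", "203.0.113.5", 443, 40002, "10.0.0.7"),
    ("sub-1", "2015-03-04", "203.0.113.5", 443, 40003, "10.0.0.7"),
    ("sub-1", "2015-03-09", "203.0.113.5", 443, 40004, "10.0.0.7"),
]

MIN_REQUESTS_TO_REGISTER = 6  # per the comment: registration took >= 6 requests

days = defaultdict(set)
requests = defaultdict(int)
for subscriber, date, *_ in LOGS:
    days[subscriber].add(date)
    requests[subscriber] += 1

for subscriber in days:
    on_list = len(days[subscriber]) >= 3          # BTK's filtering rule
    could_register = requests[subscriber] >= MIN_REQUESTS_TO_REGISTER
    print(subscriber, "on the list:", on_list,
          "| could even have registered:", could_register)
</code></pre><p>On this sample, the subscriber lands on the list (3 distinct days) while having too few total requests to have ever completed registration -- exactly the contradiction the commenter describes in his own case.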
Ask HN: I just got fired | Hang in there. There are some great replies here that I hope you take to heart, especially on the psychological side of it. It sounds like you haven't been through this before (it still sucks regardless of how many times it happens).<p>Some practical questions:<p>1) Were you fired or laid off? There is a difference, and it especially makes a difference when you apply for unemployment. How did HR or your manager frame it?<p>2) UI (unemployment insurance). I see you already applied, which is great. The process is different in every state (assuming you're American here, so correct me if I'm wrong). In CA, it takes about 2-3 weeks after you apply to get your first check.<p>3) Follow what the others said about confirming what they are going to say about you if someone calls for references. Most companies now cover their asses by only giving dates of employment, even if you were a great employee.<p>4) If you made good friends at the company, they're probably wondering what happened to you as well. Be sure to reach out, go for a drink or two, etc. I still have drinks and am good friends with people from various jobs throughout the years, some of which ended like this. When I was laid off from my last job, my personal phone and email blew up because the only thing the company said was "X is no longer with us" in an email.<p>5) Give yourself a certain amount of time to look over and reflect on what went right and what went wrong, but then cut it off and don't dwell. There were probably mistakes made on both sides that you can learn from.<p>6) Go ahead and get your resume out there, LinkedIn spiffed up, and Indeed profile updated. Know what you want from your next position, too, based on what you didn't like about this one (see #5). The good part about working a shitty job and having these experiences is that it teaches you what you don't want in life and work.<p>Personal stuff:<p>I thought mattmanser wrote a great reply for this, as did some others.<p>1) Have you had time off in a while, or did you sacrifice some of that for the job? If you want to take a short break and clear your head and heart, now is a great time to do so. I was let go in July and wish that in August or September I had taken at least a week or two elsewhere to decompress.<p>2) You mentioned shame. There's no shame here. This shit happens to a lot of people, especially in this more cut-throat day and age. Your friends, if they truly are friends, are going to empathize with you, let you talk it out, and hopefully buy you a round of beers or three.<p>As someone else said, it's business. It sounds like, if you were laid off, you're in an at-will state. Think of it this way: employers expect us to give them at least two weeks' notice, but they usually have policies of laying people off immediately and sending them out the door that day. It's an unbalanced power relationship.<p>3) You may not realize it until too late, but it's easy to get really down, and time will slip away. Nothing wrong with sleeping in the first day or two off, but if you regularly find yourself just staying in bed all day, overthinking things, that's starting to trend toward minor depression. Be sure to get outside, exercise, see people, etc. It sounds like you already have that going for you.<p>If that shit does take hold, and even in general, I highly recommend this book called "How to Be Miserable" that uses counter-examples to make you think about things.
I bought it after coming across the CGP Grey video that discusses it:<p>Video: <a href="https://www.youtube.com/watch?v=LO1mTELoj6o" rel="nofollow">https://www.youtube.com/watch?v=LO1mTELoj6o</a><p>Book: <a href="https://www.amazon.com/How-Be-Miserable-Strategies-Already/dp/1626254060" rel="nofollow">https://www.amazon.com/How-Be-Miserable-Strategies-Already/d...</a><p>4) You were already unhappy, so see this as an opportunity, a possibility, vs a negative. Work on a project you've always wanted to start but didn't have the time or energy for. Work on your job applications and/or with a recruiter for a certain amount of time, then block out time to be creative. Who knows, you might create a job for yourself, and then only you can fire you!<p>5) Overall, good luck! It sucks, but you will come out stronger for it. There are always options out there. It can be a struggle to stay positive and confident. The pain and anger of this will pop out of nowhere at times, but just realize what's happening and let it go.<p>I've been out of work for six months now, some of it due to being picky about my next job, some of it with a month or so missing because of the aforementioned depression, and now my UI has run out and I have about two months to make something happen here.<p>But, for once, I'm really fucking positive about my future. I'm spending part of the day applying to work and other parts of the day learning new things that will help out in the short to medium term. If shit doesn't work out here, my backup plan is to say fuck it and go to Taiwan and teach English for a year, where I can save money as well as continue to pursue my learning.<p>You always have options; it's just hard to know what they are sometimes. Keep talking to people, going to meetups, participating in life. You're going to be fine!
Ticks may be the next global health threat | I am currently going through Lyme & Bartonella treatment from a tick bite in northern Michigan last summer. I've been to multiple doctors and had multiple blood panels and blood tests (Lyme, Bartonella, Babesia, Ehrlichia, etc.). I wanted to share what I've learned in hopes that it helps others. This is what I've learned (it's kind of lengthy):<p>- Lyme exists everywhere on Earth, carried by avian/mice/tick/deer/animal populations, and it is adept at surviving in many hosts for a long time. These tick-borne bacteria are very adept at subverting portions of a host's immune system to maintain their existence in the host. Healthier or adapted hosts can clear the bacteria, in all forms, more easily than immunocompromised hosts.<p>- Ticks can transmit Lyme within 10 minutes of feeding, depending on the blood circulation of the host and saliva transmission rates. Co-infections like Bartonella (24+ strains), Babesia, and Ehrlichia happen in a very large percentage of bites. The 'bullseye rash' may or may not appear depending on the Lyme strain, the host immune system, and how the two interact.<p>- Lyme feeds on collagen, which is ubiquitous in the body; that is why symptoms can occur everywhere and sporadically, depending on immune response (local and systemic). Lyme uses manganese to facilitate its "feeding process" (Stanford has an in-vitro study on Claritin, which disrupts manganese in this mechanism).
I've had brain fog, slight dizziness, eye film, tendon inflammation (ankles/elbows/hands), sore neck/shoulders, and streaked rashes (Bartonella), all sporadic.<p>- 6 weeks after the tick bite, I successfully diagnosed myself with Lyme and Bartonella. It took 3+ months, a handful of doctors, and more blood tests before the doctors agreed. (Listen to your body, but don't go nuts and overexaggerate ;-). See below.<p>- A dermatologist prescribed a single 200 mg dose of doxycycline 1 week after my tick bite, roughly following CDC guidelines (useless). ~17 days after the tick bite came a 100 °F fever with severe achilles tendon inflammation (painful, almost ruptured). Started 3 weeks of Doxycycline 100 mg (2x day) from an urgent care doctor. ELISA and Western Blot IgG/IgM all negative (summary only, no antibody blots provided). Some brain fog improvement on Doxycycline. Two-week flares of my achilles tendon issues while on Doxycycline. Doxycycline extended to 5 weeks with no improvement. My primary care doctor said 3 weeks had cleared it up and the remaining symptoms were from something else (I dropped that doctor). The next internist did more blood tests and suspected something. An out-of-pocket Igenex test at 2 months indicated negative for Lyme, with trace amounts of 3 Lyme-specific antibody bands and 1 antibody shared with chlamydia. The Infectious Disease doctor at the 2-month mark said I was cured, and that they would only treat the symptoms, cause unknown, via a rheumatologist and a neurologist with pharmaceuticals. The Infectious Disease doctor ignored my Bartonella claim despite the ankle tendon pain, and a week later ignored my email photo of a Bartonella-specific rash. I timed another set of Bartonella tests with my (new) primary care doctor at 3 months, at a peak in my symptom cycle. An insurance-covered PCR (DNA) test was negative, while the out-of-pocket Igenex Bartonella FISH (antigen) test, which I found and asked the doctor to submit, was positive from blood samples taken at the same time, from the same arm. My primary care doctor is striving to learn more about Lyme now and admitted the CDC was incorrect. I started acupuncture and Chinese medicine 3+ months in, and it makes a difference. I finally saw a Lyme doctor 4+ months after the bite (I was on the waiting list for 2 months); I am getting additional western herbal treatment in combination with Chinese herbals. Things are improving, but I get occasional spikes in symptoms (all over), which is usually what people are hampered by during treatment (antibiotic and/or herbal). Sadly, my case history is less severe compared to many.<p>- Lyme testing (antibody/DNA/antigen) via blood is problematic because the bacteria prefer to feed on collagen tissue and are rarely hanging out in the bloodstream. The CDC, per the Infectious Diseases Society of America (IDSA), recommends a garbage test first (ELISA - inconsistent results from the same blood draw). The Western Blot results are often summarized instead of displaying the entire result set (the summary ignores Lyme-specific antibodies related to a defunct vaccination that almost nobody received). USE Igenex (Palo Alto), Galaxy Labs, or dnaconnexions for more reliable results (still not guaranteed).<p>- Lyme vaccination would be extremely difficult since the bacteria can change their outer surface protein (Osp) depending on the host and the location within the host. Vaccination antibodies work against these proteins. The Lyme vaccination from the mid 1990s targeted protein 34 and 1 other OspA (I cannot remember).
Both of these antibodies ARE IGNORED by the CDC Western Blot test unless you use a lab with more expertise.<p>- The CDC and Infectious Disease doctors ignore millions of clinical observations and real-world treatments in favor of small studies subject to selection bias (see uptodate.com and read through the studies). ILADS doctors, herbalists, and Chinese medicine/acupuncture practitioners actually treat chronic patients, and their clinical perspective and evidence are often discarded as non-scientific by the CDC (which is bullshit; this data is all relevant). The CDC does not emphasize, and has spent almost no resources on, iteratively addressing the shortcomings in its positions and improving the poor testing.<p>- Because of poor testing and misinformation from the CDC, many people are misdiagnosed with MS, dementia, arthritis, Alzheimer's, etc., and I've met a number of them in my limited circle of contacts. Kris Kristofferson's case is very common. For example, brain autopsies of beta-amyloid deposits (Alzheimer's) find many of these bacteria in them. In short, antibiotics are not sufficient to clear the bacteria in ~100,000+ cases per year (lots of studies and clinical cases showing antibiotics won't work). Many people are prescribed steroids or other immune-suppressing pharmaceuticals, or associated 'medicines' for neurological conditions (I was pushed to a rheumatologist).<p>- Treatment: Chinese herbal, Western herbal, some antibiotics, vitamin supplements, acupuncture, intermittent water fasting (16-24 hours helps me with herxing, or bacteria die-off). Fasting also helps with autophagy and neutrophil trap activity (neither of those works well with sugar in the system).<p>- Improvements: awareness, better bacteria testing, better blood panels to track immune state at regular intervals throughout life (cytokine, chemokine, interferon, etc.), more awareness of treatment options (pharmaceutical and/or herbal). EDUCATION and EMPATHY should be major goals of any doctor.<p>- Resources: www.ilads.org, "Healing Lyme" by Buhner, Lyme-literate doctors, and my own/friends'/family's experience.<p>- key points:
listen to your body, educate yourself, be your own advocate, and be open to different treatment approaches; strengthening your immune system is always the best option (some genetic abnormalities aside)
The Shallowness of Google Translate | >> <i>“South study walking” is not an official position, before the Qing era this is just a “messenger,” generally by the then imperial intellectuals Hanlin to serve as. South study in the Hanlin officials in the “select chencai only goods and excellent” into the value, called “South study walking.” Because of the close to the emperor, the emperor’s decision to have a certain influence. Yongzheng later set up “military aircraft,” the Minister of the military machine, full-time, although the study is still Hanlin into the value, but has no participation in government affairs. Scholars in the Qing Dynasty into the value of the South study proud. Many scholars and scholars in the early Qing Dynasty into the south through the study.</i><p>> <i>Is this actually in English? Of course we all agree that it’s made of English words (for the most part, anyway), but does that imply that it’s a passage in English? To my mind, since the above paragraph contains no meaning, it’s not in English; it’s just a jumble made of English ingredients—a random word salad, an incoherent hodgepodge.</i><p>> <i>In case you’re curious, here’s my version of the same passage (it took me hours)</i><p>I stopped reading here for now, to avoid having his translation affect what I am about to do.<p>What Hofstadter doesn't really go into is that I can still manage to extract <i>some</i> information from the machine translation, compared to <i>none</i> for the original Chinese. Not only that, interpreting machine translations itself is a skill. In a sense, instead of learning a second language, one learns to translate <i>poorly machine translated English</i>. Of course, one can still ask whether that's a good thing or not. Here's my attempt:<p>> <i>“South study walking” is not an official position, before the Qing era this is just a “messenger,” generally by the then imperial intellectuals Hanlin to serve as.</i><p>“South study walking” is GT's best attempt at labelling an unofficial position taken by intellectuals, comparable to being a messenger for the emperor.<p>> <i>South study in the Hanlin officials in the “select chencai only goods and excellent” into the value, called “South study walking.”</i><p>It was a position only available to highly-qualified <Hanlin officials>. Quick google search for "Hanlin": <i>The Hanlin Academy (Chinese: 翰林院; pinyin: Hànlín Yuàn; literally: "Brush Wood Court"; Manchu: bithei yamun) was an academic and administrative institution founded in the eighth-century Tang China by Emperor Xuanzong in Chang'an.</i> <a href="https://en.wikipedia.org/wiki/Hanlin_Academy" rel="nofollow">https://en.wikipedia.org/wiki/Hanlin_Academy</a><p>.. so people from Hanlin academy, suggesting the position was administrative in nature.<p>> <i>Because of the close to the emperor, the emperor’s decision to have a certain influence.</i><p>The position was close to the emperor, giving those who held it some influence over him.<p>> <i>Yongzheng later set up “military aircraft,” the Minister of the military machine, full-time, although the study is still Hanlin into the value, but has no participation in government affairs.</i><p>"Study is still Hanlin" is likely referring to the “South study walking” position, since we established the connection to Hanlin earlier. 
With that, this reads as: Yongzheng set up a ministry of defence, which meant the position was excluded from direct government affairs, although there was still value in having the position.<p>> <i>Scholars in the Qing Dynasty into the value of the South study proud. Many scholars and scholars in the early Qing Dynasty into the south through the study.</i><p>Many scholars in the Qing Dynasty have taken the position of "south study walking", and it was a prestigious position.<p>I'm sure this is terrible, full of errors, and even the information I correctly inferred undoubtedly misses a lot of nuance, but again: it gives <i>some</i> sense of the information the original passage contains.<p>So here is Hofstadter's translation.<p>>> <i>The nan-shufang-xingzou (“South Study special aide”) was not an official position, but in the early Qing Dynasty it was a special role generally filled by whoever was the emperor’s current intellectual academician. The group of academicians who worked in the imperial palace’s south study would choose, among themselves, someone of great talent and good character to serve as ghostwriter for the emperor, and always to be at the emperor’s beck and call; that is why this role was called “South Study special aide.” The South Study aide, being so close to the emperor, was clearly in a position to influence the latter’s policy decisions. However, after Emperor Yongzheng established an official military ministry with a minister and various lower positions, the South Study aide, despite still being in the service of the emperor, no longer played a major role in governmental decision-making. Nonetheless, Qing Dynasty scholars were eager for the glory of working in the emperor’s south study, and during the early part of that dynasty, quite a few famous scholars served the emperor as South Study special aides.</i><p>Well, that definitely reads a lot better, but I wasn't that far off in terms of meaning of the text. And it didn't take me hours (writing this comment took a long time though).<p>I absolutely agree that human translation by experts is an art, that it produces much, much better results, and that we should not let it be devalued. But the value in getting a quick impression, even if flawed, through machine translation should not be undervalued either. It has a very different application. On social platforms, for example, it is the difference between being <i>completely</i> out of the loop of a conversation or still somewhat following it and being able to ask a question for clarification in a shared language. |
Ask HN: Is it time to leave Google? | I think you probably know the answer already. Given the wealth of forums inside Google where you could ask for this advice, you've decided to post anonymously to Hacker News. That probably means there's some underlying issue, like you think your manager is going to punish you if he or she finds out that you aren't happy with your position, or you think future teams you may want to work with will look upon you unfavorably if they read the post. As soon as you stop trusting your coworkers, it's probably game over. (And I'm not saying you're doing the wrong thing, or that your concerns are unwarranted. They are probably legitimate concerns.)<p>I was extremely happy at Google for many years. I liked my coworkers, I liked my work, I liked my manager. I did get burned out from time to time, but usually there was something interesting to keep me going through the rough patches, and my team, coworkers, and managers were all very supportive of what I needed to do to stay productive and happy (which in a lot of cases was "sleep for 2 days and maybe wake up to have a meeting that would be inconvenient to move"). It was quite wonderful. I had no trouble getting promoted, got "strongly exceeds" performance reviews, and had a lot of fun. Good times.<p>All good things must come to an end eventually, however. I came into work one day and my project was cancelled (and not like "wind it down over the next 6 months", but literally "might as well delete the CLs you're working on"), and I hastily transferred to another interesting-sounding team that, in retrospect, I kind of got the hard-sell to join.<p>As it turned out, I didn't really care for the other team that I transferred to, and thought to myself "everyone else on my old team got 6 months to sit at home and research other projects to transfer to, so I'll just look for another project." I did not get that option. I was basically told "you just transferred, so you can't leave." And then told, "you really aren't getting enough work done on your own hours, I want you to be here at 9am so I can make sure you're working." That went as well as you'd imagine. A bunch of people advised me "you're depressed, you should take 3 months off and get some antidepressants". I talked with my doctor and did that. In the end, it had no effect. The third-party company that handles paid leave denied my claim, so it was unpaid leave. I decided to take a vacation right at the end of my leave... which the vacation system decided was invalid and silently discarded. When I was on vacation without cell phone service, Google started calling my parents (I'm 32, BTW) looking for me. It was quite a production when I finally got cell phone service back. 3 months of de-stressing, instantly erased.<p>I got back and started working on a new project under the supervision of my existing manager. He decided that, based on git commit timestamps, I wasn't programming quickly enough. (I got that from another very new manager once, and it was also an App Engine project. I'm not sure if that says more about me or App Engine, but I digress.) To be brutally honest, I'm kind of offended that he didn't consider me to be capable of forging timestamps on git commits. I thought about it, honestly, but in the end decided that experienced managers know that some things are easy and some things are hard.
But in the end, I thought honesty was the best policy.<p>I was pretty stressed out at this point because my manager and I clearly didn't get along, and the project I wanted to work on didn't have official headcount so I couldn't really get out of a bad situation. At that point I wrote up some email to the relevant concerned parties and realized "I do not want to read the response to this email", so I didn't. Some time passed and someone from HR called me saying "you know if you are gone for 3 days, you're voluntarily resigning, right?" I said, "yup." And that was the end of my experience working for Google. I still have my laptop and badge. They still have a box of my stuff (including my beloved Realforce 87UB keyboard!) Oh well.<p>My point is, there are other places to work. Google is a huge company and some people are happy and some people aren't. If you're unhappy, maybe you can find happiness elsewhere. I'll tell you one thing, though... antidepressants won't make you happy about a job you don't like. |
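A concrete footnote to the git-timestamp anecdote above: commit dates are self-reported, so judging speed by them measures nothing. A minimal sketch, assuming only stock git and the GIT_AUTHOR_DATE/GIT_COMMITTER_DATE environment variables it honors (the date and message here are made up):<p><pre><code>
import os
import subprocess

# Git records whatever dates the committing client supplies; both of the
# timestamps a manager might inspect are ordinary environment variables.
fake = "2013-06-03T09:00:00"
env = dict(os.environ, GIT_AUTHOR_DATE=fake, GIT_COMMITTER_DATE=fake)

subprocess.run(
    ["git", "commit", "--allow-empty", "-m", "looks punctual"],
    env=env, check=True,
)
# `git log --format='%ai %ci'` will now show the fabricated dates.
</code></pre>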
Talk Like a Texan: How Texans Use “Down,” “Out,” “Over,” and “Up” | I hadn't really thought about this much before. I grew up in rural Texas outside Waco. Up and Down are for sure aligned with the map. Over isn't directional though. Out sort of is.<p>Over is something we used when the destination was specific to a person's location. Go over to the neighbor's, in any direction. But also go over to my grandma's house. But if you didn't specify her house, then it was out to East Texas for the weekend, not over. We'd go over to Mickey's place for a week, but out to Memphis. I'm sure this is not generalizable, but that's how it works for me.<p>In general, out is anything that's kind of a pain in the ass, and over is something you're a little happier about. And there's also the "in" to contrast with "out." When I was growing up we went in to town. And back out to the farm. And of all my strange Texas-isms, that's the one that gets me the most shit from New Yorkers. I live in Brooklyn, and so I say I'm going in to town when I go to Manhattan. And I'm going out to Brooklyn when I'm headed home. And it makes people seriously angry sometimes.<p>Some of the other strange things I grew up with, that are weird even to me:<p>"Cut the lights on." What? That doesn't even make sense. You can cut something off. You can't cut it on.<p>"I'm fixin' to . . ." mow the lawn, take out the trash, make dinner. Very often used when someone is nagging you about something you are not really about to do. You're just fixin' to do it.<p>"Let's get a colbear." It's a cold beer with the d dropped and a twang on the beer, all said as a single word with the accent on the co. It sounds really weird to this day.<p>We have really weird relationships with units of distance. We don't measure distance in units of length. We measure it in how long it takes us to drive there. I remember getting called out on this not long after I moved to NYC. Some people I met wanted to know where the hell Waco is. And I figured if they live here and know any city in Texas, they will know Austin. So I gave its position relative to Austin. "Oh, about 2 hours up from Austin." 2 hours what? By subway? Jesus, that's terrible. I was like, "No, it's an easy drive. You just get on I35 and you're there. Hour and a half if traffic is good and you speed a little." Finally had to break it down. It's a hundred miles.<p>Another related thing is that we don't think twice about driving distances that would have other people looking for a plane ticket. Fun fact: it's a longer drive from Texarkana to El Paso than it is from Dallas to Minneapolis. It's just a thing we grew up with. We like our cars, and we don't mind driving. I lived in Dallas for many years (and am about to move back soon). And I would often leave work an hour early or so, beat the traffic out of town, go have dinner with my parents and siblings (about a two-hour drive), then drive home after dinner (another two hours), and not even think twice about it. Or, if my brothers wanted to hang out and have some drinks, stay there and just drive straight to work the next morning.<p>Tea is a weird thing here as well. I love iced tea, but not the way I grew up with it in Texas. I like what used to be hot tea, poured over ice until it's freezing cold. And unsweetened and without milk. But in Texas, there's a process for making sweet tea that's almost as intricate and cantankerous as making BBQ. You boil a quart of water, add 8-12 cheap black teabags, and stew those for about an hour.
Take the bags out, and bring it back to a boil. Then you supersaturate this bitter cauldron with sugar until it's basically a syrup. That concentrate is supposed to be diluted. But the problem is that whenever you're having someone over for an event that would warrant sweet tea, you're too busy making BBQ and doing last-minute cleanup to dilute it properly, so you throw it all in an 8-quart pitcher, fill it up to the top with ice, and add some water. The restaurants don't do a much better job. It's basically like mainlining simple syrup with dark color and a bitter flavor. I hate it. But I'd be dead if I ever said that within state borders.<p>And since I'm getting a little nostalgic, I'll wrap this up.
I've heard before that people who aren't from Texas think of our gas stations as grocery stores and our grocery stores as stadiums full of food. After 3 years in NYC, THIS IS SO TRUE! Nothing has been able to consistently frustrate me as much as trying to grocery shop here. It is so unreal what people have to deal with. And then, when you go to 3 different hole-in-the-wall grocery stores (which may involve multiple subway rides, carrying your loot as you go) just to get the ingredients you need for one meal, the prices in the stores are so ridiculously high that you really haven't saved that much money over just getting crappy takeout or delivery every night. Instead, you've wrecked your back lugging 80 lbs of groceries, wasted your weekend, and are pissed off anyway. I feel genuinely sorry for people who live here.
Ask HN: What's your most nostalgic memory a life-long career with computers? | 50 years. Somewhat different to most. Mixed hardware/firmware/software, niche talents matched to niche markets. And practically all self-taught. Along the way I got a BS and most of an MS in CS, but none of it was applicable to my career, nor required for it, except for the one class I still keep alive: the FSM class. Perfect for hard real-time, event-driven applications. My metier. Almost all of the 50 years qualifies as nostalgia, even the present, because it has all been fun, all been continuous learning, all been continuous invention, and except for some esoteric math in my current Audio DSP path there has never been a book about it.<p>1967-9 accidentally hired at Elliott Automation UK as a commissioning technician, finding bad wires/DOA components in their 4100 series mainframes as they came to the end of the production line, prepping them for shipping. Completely discrete, hand-built machines, right down to hand-woven 48K x 24b main magnetic core memory. No prior experience outside of hobby electronics, only 19, CS did not exist then. This experience probably seeded all my first principles of computing; I had to teach myself in intimate detail how a room-sized computer works, and how to program it. Both good and bad.<p>1969-71 (I know, by decade, but my trajectory does not fit decades easily) design engineer at an EG&G UK subsidiary called Nuclear Measurements. Another accident. Made instrumentation for the two main nuclear fusion labs, Daresbury and Culham. Got to climb around inside the Zeta pinch 1MF capacitor and the Tokamak. By chance visited a PCB layout researcher using my old 4130 with vector graphics, thesis on rats'-nest automatic layout techniques.<p>1971-5 Another poach, this time into the data communications world. I'll note here that although I was then a hardware nut, programming computers is always an essential skill to have. And also, in those days of really tight resources, doing it efficiently was the default, whatever Tony had to say about premature optimization - the code has to at least fit before it can be made to work. In this phase I even had to design a custom computer a bit more featured than a Z80 - built only in SSI 7400-series TTL - and write an assembler for it. And this was in 8008 days; there was no Z80 when I started.<p>Still a big-eyed kid, somewhat bemused that folks wanted to actually pay me to do all this learning. Learning that even today seems to be missing from CS/CE, at least as far as I can tell from graduate hires ever since. Got very good at "making it up as I went". There was no book, really, for what I was having to produce.<p>1975-89 moved to a sister company in LA - poached again - got even deeper into data comms, designing statistical multiplexors and protocols for university time-share computer centers. Employers made their $Ms but neglected to mention stock options to me. OTOH as a founder and tech lead I had a blast, and seeded a great dev lab culture around me, many of whom are still in touch.<p>1989-97 This was a really fun time; a buddy and I did our own startup (Lone Wolf Technologies) based on a mission-critical deterministic protocol we had dreamed up, and a product using it to network MIDI synthesizers and controllers in large venues and pro studios.
The Moto 68HC11 box did 4 ports; a glass-fiber networking physical layer allowed up to 2 km separation between multiple boxen, each with a mirrored virtual 16x2 config LCD onto the network as a compound entity (edit any box from any box), plus full soft topology routing, filtering, and remapping. Yet more learning OTJ, and inventing. And the customer base was, well, rarefied. Herbie Hancock, ELP, INXS, that rarefied. Not actually a Good Thing, as it transpired. No volume. But oh what fun. Then Paul Allen invested. A long story, not for here. But it moved us all to Seattle.<p>1998-9 designed an FPGA for autonomous isochronous multi-media FireWire transport (AFAIK the only one that did not funnel channel data through the CPU). I had been missing hardware work; this was an intense but ultimately successful gig, and it eventually made its way back into the music world as a control surface driver/interface.<p>2000-7 Second time around for Lone Wolf, soon renamed SingleStep, designing a graphical programming/delivery platform aimed at large-scale network management. Again great OJT learning and invention, but the customer base was not so much fun.<p>2008-present Now this one is a fun employer with a fun product. Not just fun to my nerd core - fun is their product. I refer to Nintendo. Worked on audio engines for Wii U and Switch.<p>The two big gaps: "Consulting Years". Nail-biting, more like. No nostalgia for those.<p>So, not your typical career path. No front end / back end / full stack crowded job market. I'd have been out of that and unemployable probably 20 years ago if that were the case.
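For anyone who hasn't run into the FSM style the comment above keeps alive: a minimal, table-driven sketch of an event-driven state machine. The states, events, and actions are invented purely for illustration; hard-real-time versions are typically just a constant transition table indexed by (state, event), so every response is explicit and bounded.<p><pre><code>
# Transition table: (current state, event) -> (next state, action).
TRANSITIONS = {
    ("idle",    "start"): ("running", lambda: print("spin up")),
    ("running", "tick"):  ("running", lambda: print("do work")),
    ("running", "stop"):  ("idle",    lambda: print("wind down")),
}

def step(state, event):
    """Dispatch one event and return the machine's next state."""
    next_state, action = TRANSITIONS[(state, event)]
    action()
    return next_state

state = "idle"
for event in ["start", "tick", "tick", "stop"]:
    state = step(state, event)
</code></pre>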
Ask HN: What's your most nostalgic memory a life-long career with computers? | I’ve worked with computers since the early 80s, starting with a Commodore Vic-20 in 1982. My first program was implementing a complete “choose your own adventure” book. I had never heard of Zork.<p>In high school, we used Apple ][+ and Apple //e computers. I was not able to get into any of the programming classes as a junior, because the classes were all filled by seniors. But they let me into the computer lab in my own time, before and after school and during lunch. After a couple of months had passed, I was already helping the seniors with their work. When I became a senior, I they let me skip the intro class, because I had basically helped teach it the previous year. I went straight into the advanced class, which was Pascal and not Basic. I was on a team of two that finished the class project so fast that the teacher assigned me to a second team where the guy was alone. On that second team, we still finished before any of the others. I graduated in 1984.<p>In college, my first programming class was in COBOL on IBM 3081 mainframes, using punched cards. Then FORTRAN. Later, I got to revisit Pascal. For my Numerical Methods class, I wanted to learn C. So the grad student who was actually grading the homework for the professor agreed, and we got to learn C together. I was so excited by learning C that every one of my programs executed perfectly, and I turned in all my homework before anyone else. I didn’t even have to take the final, because there was no way I could score low enough for that to hurt my grade. The next major language I learned was Prolog, for what was supposed to be a class in Artificial Intelligence. My senior year, I took a Software Engineering course. I finished my work to graduate in August of 1989, but due to problems with my professor failing to give me a grade before he took a job programming for one of the airlines, I didn’t technically graduate until 1990.<p>In September of 1989, I went to work for the Defense Communications Agency, in the basement of the Pentagon. My first programming task was helping to port the classified AIRFIELDS database from COBOL-66 to COBOL-77. However, I had learned UNIX in college, and so they also made me the UNIX Adminstrator in the branch. They gave me a diskless Sun 386i soon after I arrived. I got tired of constantly updating the /etc/hosts file from the HOSTS.TXT file from nic.ddn.mil, and so I set up what I believe was the first caching recursive nameserver inside the agency, and learned a lot about bind in the process. Later, when the agency was having a lot of problems with their Internet e-mail service, I got drafted in to fix that, and I learned a lot about sendmail. I did a few other things, including personally pissing off the Director of the NSA with a letter I wrote to the editors of the Communications of the ACM in 1992 (published in November of that year, see <a href="http://www.shub-internet.org/brad/cacm92nov.html" rel="nofollow">http://www.shub-internet.org/brad/cacm92nov.html</a>), but I still managed to become the official Technical POC for DISA.mil, and I convinced the SIPRnet administrators that they should use the DNS and not HOSTS.TXT tables, as well as using real network numbers as assigned by the NIC instead of just pulling random numbers out of their ass. I also turned several Class B and at least one Class A network back to the NIC, because the agency wasn’t using them. 
By 1995, I had become the sendmail FAQ maintainer, and that led to a new opportunity to go work at AOL.<p>At AOL, I helped install their fifth internet email gateway in my first week there. I did a lot of stuff in a short period of time, and became their Senior Internet Mail Administrator. In 1996, I got blamed for taking down all internet e-mail around the world (as a side-effect of the network outage explained at <a href="https://www.cbsnews.com/news/the-day-america-online-went-offline-in-1996/" rel="nofollow">https://www.cbsnews.com/news/the-day-america-online-went-off...</a>), and I have been told that both qmail and postfix were created as a result of that incident. By 1997, AOL had become a toxic enough place that I could no longer survive there.<p>I could go on, but although I remember a lot of these details, I find that I'm not really nostalgic for them. I guess the times were pretty good, but frankly I think the times since then have gotten better and better.<p>Well, maybe not with regard to the recent US presidential elections. But otherwise, I think many things now are much better than they have ever been for many people, and maybe that scares some of the 1%ers at the top.
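The HOSTS.TXT chore described in the comment above predates the DNS: every site periodically fetched the NIC's flat host table and regenerated its local /etc/hosts from it. A minimal sketch of that conversion, assuming the RFC 952 field layout (HOST : ADDR : NAME,ALIAS,... : CPU : OS : PROTOCOLS :); the sample entry is illustrative:<p><pre><code>
def hosts_txt_to_etc_hosts(lines):
    """Convert RFC 952-style HOST entries into /etc/hosts lines."""
    out = []
    for line in lines:
        line = line.strip()
        if not line.startswith("HOST"):
            continue  # skip NET/GATEWAY/DOMAIN entries, blanks, and comments
        fields = [f.strip() for f in line.split(":")]
        addr, names = fields[1], fields[2]
        out.append(addr + "\t" + names.replace(",", " ").lower())
    return out

sample = ["HOST : 26.0.0.73 : SRI-NIC.ARPA,SRI-NIC,NIC : DEC-2060 : TOPS20 : TCP/TELNET,TCP/SMTP :"]
print("\n".join(hosts_txt_to_etc_hosts(sample)))
# -> 26.0.0.73    sri-nic.arpa sri-nic nic
</code></pre><p>One cron job of roughly this shape per site, multiplied across the whole network, is why the DNS (and the caching resolver the comment mentions) was such a relief.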
Ask HN: Does anyone here suffer from a speech impediment? | I'll split this into parts per your various questions.<p>=-0-=-0-=<p>> Anyone here have a Speech Impediment?<p>In early public grade school [1st-4th grade, after it was first noticed during kindergarten] I had to go to the school's speech therapist; I had a speech stutter plus a mild lisp against the sound "sh" as in "shut up", a worse lisp against the sound "ch" as in "chimney", and a medium lisp against pronouncing the letter "h" as in "e-f-g-H-i-j" (the letter name, not the actual sound as in "Hairy Henry").<p>Naturally, I hated being pulled out of class 3x a week for the speech therapy; it just screamed "Hey, that kid's SPED" ("SPecial EDucation") and I knew it, and I also knew it wasn't doing me any favors on making friends [the prospect of bullying for "not talking right" be damned, that just wasn't a concern for me then compared to actual social interaction and social familiarity being time-limited -- don't ask me how I understood this so early on, I just did]. When I switched schools in the middle of 4th grade, I still had to go to the speech therapy, but then also to the "catch-up sessions" for "being academically behind" too, so now I was forced to double down on being pulled out of class. Not Good, as now I'm "Double SPED". I had to get out of that crap, and the sooner the better; I had to Be Normal [which became a Life Goal, as rational or irrational as you need that to be], especially as the New Kid who had just switched schools in the middle of the bloody school year.<p>Before that school year ended, fortunately, I managed to talk my way out of continuing the speech therapy by explaining this "SPED Stigma" to the teachers, parents, and speech therapist involved [I realized the pun of phrasing it this way only 2-3 years ago, and decided to keep it for the telling here]. I got to skip the speech therapy for the rest of time, and agreed to just grit my teeth through the catch-up sessions until summer came. Luckily enough, I had "caught up enough" to be treated as a normal student from 5th grade onward. My stutter was an occasional treat instead of an Always Thing, and the "sh//ch//h" pronunciation issues were just what they were gonna be at that point, with no additional help to be had.<p>By university it was clear to me that my stutter was going to manifest itself when I was overly nervous, overly excited, or mentally impaired [e.g. migraines, booze]. That's fine; no one cared or cares. For the pronunciations, guess what? One single person since 4th grade has asked if I "had a lisp", and simply because he was honestly curious and we were cool at the time; if anyone else has ever noticed, they've kept their mouth shut to my face, so that's all whatever on that front.<p>I bought a copy of "The King's Speech" after seeing it in the theater. The scene with the record player and headphones in the therapist's office, and the climactic scene with the radio address broadcast to the whole nation, are touching.<p>Last year I went to a ComicCon and quite randomly got an autograph from Matt Frewer... AKA Max Headroom. I mentioned that my olde speech therapist from grade school had mentioned his character numerous times in attempts to get me to be comfortable and at least be less self-conscious, and he mentioned how the national "speech therapist stutterers association of America" group at the time (or whatever it was called) kept calling in to the show with good will. He also gave an obligatory compliment toward how I spoke, which I of course very much appreciated.
Yes, I promptly framed the autographed Max Headroom picture, with no regrets.<p>> What about your work, phone calls, and The Dread Pirate Presentations?<p>My work isn't negatively influenced. Sure, I get nervous during presentations I have to give, but since I'm aware of my own stutter whilst nervous, I add in the pauses and other mental gymnastics that can help my speech; I still get errors, but I'm ok with that for now. Still, no one cares enough to mention it; I'm still not a card-carrying member of Toastmasters, but I do listen to The Moth.<p>I've never had problems with conference calls or meetings; sure, I'll double up on stuff like "... what, what about XYZ?" or "But that's, what's that gonna do about ABC?", but that's normal enough conversational phrasing to not blip on most people's radar, and no one's said anything to my face about it.<p>=-0-=-0-=<p>> Work with people with speech issues?<p>Knowingly, I have not, as far as my memory is concerned, but a few years ago I did interview at a place where one of the higher-ups who interviewed me did have a severe stutter. "Stage 4", honestly speaking.<p>Obviously I was sympathetic and patient -- because of my above history, not necessarily because I was on the short end of the power dynamic -- and asked for clarification on anything I just couldn't understand through his stutter, as if he had had a very heavy accent instead. He seemed used to such interactions and took it in stride. I still felt uncomfortable, a lot of it due to my sympathy and empathy for him; we both had no illusions about what was going on -- there's a severe stutter here, he's clearly making an effort to be understood, I'm clearly making an effort to understand, so let's do this -- but since I was on the short end of the power dynamic I didn't want to play the "I just can't understand you" card every two minutes [yes, I needed clarification at that frequency at a minimum, it was that bad].<p>Note: I wasn't given an offer for that gig, but it had nothing to do with this guy's stutter or my behavior around it; this was simply a Bad Fit all around, and I have little regret over it.<p>The "good news" here is that at this company there's a higher-up who has a severe stutter; people clearly have the patience [or tolerance, or professionalism, or whatever thesaurus equivalent you want to use] to have this guy around creating value.
The unnecessary demise of Barnes and Noble | The writing is on the wall for Barnes and Noble, and I can't help but see what they've done here as mirroring what Circuit City did just before they shut everything down. IIRC, they ditched their most experienced salespeople, who were also the highest paid, and not long thereafter, they were done.<p>That said, I don't entirely agree with the assessment of the author. She <i>sounds</i> like one of the employees that was let go, and her view of things might be colored by that. She seems to be over-valuing the contribution these employees made to the bottom line of the business. She may or may not be right; I can't, personally, speak to that -- where I live, there hasn't been a Barnes & Noble nearby in a long time. In fact, the place I work is <i>located on the second floor of what once was a Barnes & Noble</i>. When there <i>was</i> one a couple of miles from my home, I rarely purchased anything from them other than coffee.<p>This is anecdotal, of course, but I don't know that pulling employees from the floor and putting them to work boxing and shipping orders from the store was such a bad idea. If it did, indeed, decrease shipping time and costs, it would have been a <i>fine</i> use for that staff, assuming they could find enough people to buy books online when those people could often save money at Amazon. In the <i>decades</i> that I did spend time at Barnes & Noble, my interactions were limited to the "kinda-Starbucks" folks and the (rare) teenage store cashier. Maybe I am a very specific customer[0]; I went there because of the selection of obscure programming books. I'd often need something not available at my library. They'd have a few options, and I wanted to review them before having the library order one.<p>I buy the argument that it's not necessarily Amazon that was B&N's downfall -- it hurt, a lot, I'm sure, and I know former B&N customers who now exclusively purchase via Kindle -- but there are a lot of people like me, who didn't buy books from <i>either</i> place. In a twist of irony, I own a second-generation Nook with the e-paper/color screen. I have never owned a Kindle.<p>Here's my take on where B&N fell over:<p>(1) Failure to recognize who the competition actually was. Amazon needn't be mentioned; it's obvious. But the big one for me was my local home-town library. They are just as close, a little bigger, just as nice (sans Fivebucks coffee), but I don't have to spend <i>anything</i> there, and being a speed reader (skim/scan style, not the gimmicky nonsense), I get through my book during the borrowing window. They also have automated self-checkout and self-dropoff, so I wait in far fewer lines. So I'm already inconvenienced by having to <i>gasp</i> drive somewhere, but my library handles the experience <i>entirely better</i>.<p>(2) Very expensive real estate: I mentioned that my employer is in a former Barnes & Noble location. Our office is in an incredibly desirable area. I have no idea what the cost is, but I know that homes with fewer bedrooms than I have and half the square footage go for twice the price of my home (and of homes just a few miles up the road) in <i>less</i> desirable parts of this suburb. The stores are very pleasant to be in, with expensive furnishings in expensive, convenient areas. These things don't come cheap.
There's a reason the mom-and-pop booksellers are located on the dumpier side of town, often have "use other door" signs posted on 8.5x11 paper in ink-jet Comic Sans all-caps print, and keep their books jammed together on shelves stacked floor to ceiling, separated by aisles that are barely large enough to meet ADA requirements... books are low-margin items. Every time I visited a Barnes and Noble for Fivebucks coffee, I wondered how, on earth, they stayed in business.<p>(3) Staffing costs had to be high. Ignoring the whole thing about the CEO and executives making stupid amounts of money[1], the staffing required to provide the level of customer service demanded by those customers who <i>do</i> need help, especially those who purchased a Nook and mistakenly thought that driving over to B&N would be helpful in solving their problems, is impossible to fund profitably. And I don't think that the quality of their staff would have been the same as at the mom-and-pop bookstores described in point #2. The few mom-and-pops that I've been to are staffed by book-obsessed individuals; those individuals are often also the owners of the place, and the owners of the place are often the only employees. Thinking back about a decade, to when I last went to a B&N, I'm not <i>entirely</i> sure that I <i>never</i> needed help, but I am sure that I <i>never</i> received any. I don't ever recall seeing an employee anywhere <i>other</i> than at the coffee shop or behind the counter ringing people up. Where they <i>had</i> employees, they had too <i>few</i> of them. It always took <i>forever</i> to get out of there with a (rare) purchase; even a (less rare) coffee purchase. I think they are the record holder for saving me money through waiting . . . how bad did I want that coffee? 15 minutes bad? Or that 2600 magazine that had the cool payphone photo that I wanted to keep; did I really want to wait in line behind those 9 other people for the one cashier who's ringing everyone up in their (pointless) common-feeder line?<p>(4) Inventory costs had to be high. I went to B&N <i>only because I knew they had a huge selection</i>, yet I rarely purchased anything from that selection[2]. I would never have walked <i>into</i> the store if they didn't have that selection, though. It's a <i>lot</i> easier for Amazon to handle that problem. My local library, as well, is <i>massive</i> and actually larger than the B&N was (though with a more limited programming book selection). Back to point #1: their competition has far fewer difficulties doing what they do.<p>Of the problems I can see, fixing any of the bottom three would be customer-hostile in (probably) equal ways -- different impacts, but each would take enough away that they'd lose a lot of business. Staffing -- the third -- is the only one that can be cut quickly and have an immediate impact on the bottom line. The problem, though, is that it's like borrowing money at a higher interest rate to pay off debt[3]. They'll lose high- and probably medium-maintenance customers, and they'll lose customers sympathetic to the employees' circumstances, which will drive them into cutting more. It's a downward spiral, and they're so close to the bottom that I don't see them having a chance at this point.<p>[0] Generally, I ended up there in search of books related to programming, networks, and other computing topics. They had the largest selection, and I could crack the books open and see if they were worth burning the time on, which I would often then do in their coffee shop over a few weekend days.
I'd occasionally look over the fiction titles under topics I like, but I don't read fiction; I listen to it, and I generally did that by grabbing the CDs or (<i>gasp</i>) tapes from the library.<p>[1] I'm not disagreeing; it's absurd given that they're failing at their actual job. It's just obvious enough, IMO, that it doesn't require more detail.<p>[2] PSA: I feel that way about Microcenter. Somehow they have almost everything that I can find on Newegg, and they manage to be nearly competitive, price-wise, with them. I don't want them to vanish, though, so I make it a point to do as much of my geek spending there, in-store, even if they are a little pricier; plus ... instant gratification is nice.<p>[3] Or, as is said all too often by people who aren't thinking about what it is they're actually saying, the equivalent of "digging yourself out of a hole".
Ask HN: Is it 'normal' to struggle so hard with work? | I have a bit of a similar problem. I have a bachelor's degree in CS, but I'm not passionate about programming. I mean, I like it, but in the way that people like strawberries. I enjoy it, but 3 courses a day is too much, as is several days in a row with no breaks.<p>I used to loathe myself for not paying enough attention to math. I recently tried to apply for a "backend engineer - data scientist" position. I read about the "R" programming language they value. It's like math with a programming language strapped on, not a programming language optimized for math. Which reminded me why I never got very far with math, even though I once took 2nd place in a primary school contest and received a monetary reward. I don't usually enjoy math if it's without a clear purpose, like needing to understand more math to write some target prediction code in a game. Math for math's sake, like statistics, matrix operations, higher algebra - these things bore me. I can understand them if I stare at the math-symbol soup long enough and keep taking it apart, but my mind drifts away and I struggle to focus. This is despite using various motivation tricks and even meditation.<p>But I need to come to grips with the fact that I just don't enjoy math that much, despite the high hopes my math teacher had for me. She even got me tested for Mensa (I'm way too low: IQ 120, and the threshold was 140 last time I checked).<p>I enjoy Python, I enjoy figuring out how to program something in Rust, I like writing documentation, reading articles, arguing with people, analyzing stuff (I've spent wayyy too much time analyzing patch changelogs for games I never played, or developing creative strategies for games). I like drawing a bit, music - a lot; I like animals, physical exercise, shooting a bow, strategic board games with relatively low randomness. Many other things. It seems like my skill points are spread over many areas, but I'm not great at any of them. I admire Renaissance men like Leonardo da Vinci, but this doesn't translate into finding an appropriate job very easily. These days specialists are highly prized, and interdisciplinary knowledge is harder to use unless you have a brilliant idea.<p>I started observing myself and my feelings more closely, and I encourage you to do the same. Pay attention to which activities you enjoy and which you don't, and be honest with yourself. Even if you've already missed an opportunity, with good self-knowledge you can find another one. For example, I'm a nerd; I have an affinity for technology, history, archeology, animals, knowledge in general. I read a lot. I like sharing knowledge with others. Maybe I could have made a good teacher if I had had better grades, or a scientist. But I get my fulfillment by... teaching people new board games, or teaching them how to work out without getting injuries.<p>I'm pragmatic. I think programming and IT is a good, interesting, and well-paid job. But I don't breathe it. I work to live, not live to work. I try to explore different aspects of life and acquire new skills. People in the past used to get satisfaction from many different places. Many famous people were mathematicians, painters, journalists, travellers, etc., all at the same time. They were losing money on some of these activities, but did them for enjoyment.<p>Observe your feelings. Think of the last time you enjoyed doing something. Take notes on which activities bore you and which give you a rush. If you identify several activities you enjoy, play connect-the-dots.
Example:<p>I'm analytical, patient, have some affinity for art, and I like animals. Origami is all those things.<p>Another example, how to reverse-engineer your interests: I enjoy people who keep surprising me, movies that are not very straightforward, books by Philip K. Dick, fantasy, sci-fi, old maps (the weirder, the better), archeology, lost civilizations, the cartoon Moomintrolls. What do these things have in common? Answer: MYSTERY.<p>Pay attention to what people are saying about you, especially about your skills, interests and what you have an affinity for. People have observed that I happen to like badgers quite a lot (and other animals), and suggested that I should perhaps work in a zoo. I've also been called a philosopher a couple of times, including in high school, and I've managed to instantly spot others who weren't labeled as such. One person told me I have a scientist's mentality. I'm also a bit of a contrarian (philosophers delighted in trolling!!!). See, I'm a lame version of Paul Graham.<p>Of course, finding an activity you enjoy and finding one that you can get reasonable money for are two different things; when too many people enjoy a field, the pay suffers. That's why "e-sports" (computer game tournaments) and game development are not well-paid activities.<p>Try to think about which aspects of being a freelancer you enjoy and which you loathe. When you're happiest, when most burnt out. Debug thyself. |
A320-X DRM: What happened | Gotta say, on reading this I went <i>"They did what?!"</i>. This raised my eyebrows so much, I'm fairly certain they're now lost somewhere in my hairline. In the US, at least, I'm fairly certain they've broken more than a few laws. Kudos to them for admitting it, entirely; I guess that would be the only hope they'd have of ever regaining any sort of trust with their install base, but they'd be equally smart to find a lawyer[0].<p>I get that a <i>lot</i> of time and effort goes into developing a product and it's <i>really</i> frustrating when you find out that your product is being pirated. I can't imagine happening upon a whole community sprung up around pirating my product; they must have been incredibly angry, and that likely led to this terrible idea. And no matter how often you repeat to yourself that "piracy does not represent lost sales", when you've poured your time into something -- time that you hope will make you a nice living, time that you took away from your family or other enjoyable pursuits -- you tend to get <i>really angry</i> when you find out there are people that think it's perfectly OK for you to work for free.<p>I like to repeat the mantra that "piracy does not equate to lost sales" and tell myself that those wouldn't be paying customers anyway, or they're not my real target audience, or that it speaks to the popularity of the product if someone went to the trouble to crack it. And I'm a believer that effort spent on DRM is wasted (especially efforts like this). It's not entirely true, of course. I always think back to the story I read a few years ago about the TCP/IP stack that nearly every DOS PC used -- a piece of shareware that, at the time, was probably the most popular piece of shareware in existence. I'm sure I, like many, didn't even think to pay for it, operating under the assumption that large corporations were probably paying for it and the developer was probably using $20 bills as kindling by now; in reality, I think he netted somewhere in the <i>thousands</i> of dollars for his efforts.<p>At the same time,... well... <i>this</i>.<p><i>This</i> is exactly what happens when you focus on piracy so hard. Developer time is a finite resource, and this company wasted that time developing a piece of DRM that is indistinguishable from malware. It succeeded in <i>not</i> stopping the pirates, <i>not</i> catching the pirates, angering their <i>paying</i> customers, exposing them to civil legal issues and possibly exposing them to criminal legal issues. That little bit of wasted developer effort could very well <i>end</i> the company that made this product in a way that piracy probably never would have. Assuming their customers <i>like</i> the product and would like it to continue to exist, this company basically did everything in their power to <i>not</i> serve their customers.<p>To be clear, I'm not completely against purchase verification in software products. If it's light-weight and doesn't get in my way as a customer, it's fine (i.e. provide the ID/password used when it was purchased, with a fallback to an offline serial number ... asked <i>one time</i> and <i>never again</i>). I get it. A small road block is enough to keep my mom or dad from grabbing a copy that a friend attached to their e-mail. Heck, in the case of my mom or dad, they may not even <i>realize</i> that it's not a free product if it <i>doesn't</i> ask for some form of verification.
I don't mind how Steam works or how the variety of stores handle these sorts of things. If your DRM effort goes beyond this, it's wasted effort. You're not going to stop a determined pirate even (especially?) if the product you're selling <i>is an anti-piracy product</i>. Just don't. Don't waste the effort. It's <i>never</i> worth it.<p>Even as I write this I'm still <i>amazed</i>. I get frustrated when I upgrade my CPU/memory/GPU and <i>Office</i> won't run without some extra steps. I can't <i>imagine</i> if step #2, after falsely identifying me as a pirate, was "send a bunch of personal data to the authors"[1] so they can turn me in to the authorities. Pro-DRM folks like to equate piracy with theft, so I'll make an equally poor analogy and say that'd be like if I purchased bed sheets at Wal-Mart, and the processor in those sheets[2] decided I stole them, so they started sending the GPS location of my house along with pictures of my bedroom to corporate so that they could turn that information over to the police.<p>[0] It sure wouldn't be difficult to compare this with any other piece of malware in its behavior, but IANAL.<p>[1] And yes, I realize that they've stated that they're looking for specific information from a specific pirate that they consider to be the <i>source</i> of the problem, but including that payload in the installer makes me question the truth of this statement. I don't have any reason to disbelieve them, especially considering they've basically written up a post admitting to a bunch of activity that may very well be illegal in nature. But having no way to verify that they are telling the truth, or that there isn't a circumstance that could false-positive flag someone who <i>isn't</i> that very specific case, I will err on the side of assuming the worst.<p>[2] I laughed when I wrote that, then I thought ... there are probably already sheets with processors in them. If there aren't, there will be. Shortly followed by the first case of DDoS by IoT bed-sheets. |
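For what it's worth, the "offline serial, asked once" scheme the comment describes can be genuinely tiny. Below is a minimal, hypothetical sketch in Python -- the HMAC construction, the secret, and the serial format are all illustrative assumptions, not anything this vendor (or any vendor) actually shipped:

    import hmac
    import hashlib

    # Hypothetical vendor-side secret; real key handling and serial
    # formats are design decisions not described in the article.
    SECRET = b"vendor-secret"

    def make_serial(email: str) -> str:
        """Derive a short offline serial from the purchaser's email."""
        digest = hmac.new(SECRET, email.encode(), hashlib.sha256).hexdigest()
        return digest[:16].upper()

    def verify(email: str, serial: str) -> bool:
        # Constant-time comparison avoids trivial timing attacks.
        return hmac.compare_digest(make_serial(email), serial.upper())

    # Asked one time at install; store a local flag and never prompt again.
    print(verify("mom@example.com", make_serial("mom@example.com")))  # True

The point being: the entire "small road block" fits in a dozen lines, with no payloads, no phoning home, and nothing that could be mistaken for malware.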
Apple Maps vs. Google Maps vs. Waze | Glad to see that my anecdotal experience with Google Maps and Waze turned out to be correct[0]. I've noticed, consistently, that Waze underestimates the drive time that it quotes, and I've also had the sneaking suspicion that the routes weren't <i>perfect</i> (though I had no expectation that they would be; there's a lot that can go wrong on a commute that takes 40 minutes without traffic).<p>I think one of the reasons for Waze underestimating route times has to do with it not properly correlating the traffic speeds provided by its many users against the route selection. For instance, my route involves changing from a local highway that ends to an interstate where the left two lanes are the lanes you have to be in to get where I'm going. These lanes are often moving at around 5 MPH on average whereas the other side moves at around 60 MPH. Waze will indicate that the freeway is moving at an average of 40 MPH and will leave me on that road, rather than taking an easy, single-light, traffic-free, <i>shorter</i> route around the problem. I've never spent more than 5 minutes navigating the route-around, but I've had times where staying on has taken over 20 minutes (and 5 minutes would be <i>fast</i> most days). It <i>rarely</i> suggests the route-around (despite Waze having introduced it to me). It isn't intelligent enough to know that if I'm taking that exit, I'm in the 5 MPH traffic, not the 60 MPH traffic. It's either not taking into account that my route would result in slower traffic than average due to lane position, or it <i>is</i> and the other Wazers it's basing its estimates on are less conservative drivers than me -- I tend to get in the lane I need to be in early rather than flying around traffic and pushing my way in. It's got to be a <i>really</i> hard problem to solve, in general. Padding route estimates or estimating toward the worst-case scenario would probably be a good idea.<p>That said, Waze is the <i>only</i> app that has actually <i>hooked</i> me into using it on <i>every</i> trip, <i>every</i> time. The Waze app gives me speed limits and warns me when I am exceeding them by a pre-set amount[1]. It's also pot-hole season where I live, and due to the wide user base of Waze and its hazard reporting, I get a warning (well, lately, hundreds of warnings) when there's one up ahead. By far the biggest benefit, though, is the police warning. I don't <i>intentionally</i> speed; I generally subscribe to the rule of "keep up with traffic" while keeping to the right-most lanes[2]. That said, keeping up with traffic has gotten me pulled over once (no ticket, due to a decade of not having a ticket and my only exceeding the speed limit by barely 10 MPH).<p>In the last <i>year</i>, Waze has beaten me at spotting the hidden police car every single time. I went on a 3-hour drive with a close friend who has one of those supposedly "covers everything" radar detectors, and on that trip Waze spotted four more cops than his radar detector[3] did and had one partial false positive (he had moved ahead about a mile, so the app showed two cops instead of one), whereas his detector went off so many times in the city that he had to silence it. The avoidance of traffic fines and added insurance cost, alone, makes it worth it[4].<p>I'd say, for me, I started using the Waze app on the promise that it would get me around some of the unexpected traffic difficulties in my daily commute, but the reality is that after a few months, I know the areas involving my commute far better than Waze or Google.
Both navigation apps have made me a slightly more intelligent driver. I'd often skip off the freeway for surface streets when things got slow and was surprised when navigation apps wouldn't <i>suggest</i> that I do that. It turns out that the time spent at traffic lights in traffic well exceeds the time lost to a slow freeway. It <i>feels</i> faster, sometimes, to exit the wall-to-wall 15 MPH slog, but you trade it for periods of 40 MPH traffic mixed with longer periods of just sitting idle.<p>But the bigger eye-opener was something Waze seemed to do which Google Maps didn't -- when there were lane closures, it'd have me enter the freeway at the last on-ramp <i>before</i> the lanes re-opened, rather than at some point <i>after</i> the lanes re-opened. And every time I ignored it, I ended up in some creative form of traffic hell. After thinking it through, I realized that the first on-ramp after traffic re-opens is often <i>flooded</i> with everyone else who's trying to avoid the construction. The last on-ramp just <i>before</i> the lanes re-open is not similarly flooded, and yet traffic on the freeway is moving at about 60% of the speed limit at that point and only getting faster as it reaches the re-opening. Since the last merge bottle-neck is that on-ramp and it's not clobbered with diversion traffic, it's a very good workaround.<p>[0] I don't own Apple products, so these are the two "big choices" on Android.<p>[1] I bumped it up from warning at the actual speed limit because driving the speed limit on some of the roads I travel is more likely to cause death-by-someone-else's-road-rage.<p>[2] Where I live, two-lane roads require drivers to remain in the right lane and use the left lane for passing, and this rule is enforced ... just about never. Multi-lane roads are fair game, with most remaining in the middle one or two lanes for the entire duration. I'm not even sure what the law is for these, but if I were to judge based on how 90% of people drive, I'd say the law is to <i>never</i> be in the right lane, and that the left lane is for people traveling more than 20 MPH over the speed limit. We have no HOV lanes.<p>[3] I can't write that without thinking Radar Detector Detector Detector Detector <a href="https://rationalconspiracy.com/2016/06/02/radar-detector-detector-detector-detector-almost-certainly-a-hoax/" rel="nofollow">https://rationalconspiracy.com/2016/06/02/radar-detector-det...</a><p>[4] "Well, you can avoid that by just not breaking the law." Except that traffic laws are among the few laws that are easily broken unintentionally. |
Preparing for Malicious Uses of AI | I believe much of this can be dealt with, if we start <i>RIGHT NOW</i> to address a serious issue in our systems - the lack of a way to represent Morals and Ethics (the When and the Why) in the systems we are building. This needs to provide important input to, and thus shape, the DIKW(D) pyramid.<p>I've been doing some work with the IEEE on this - and I'm looking here on ycombinator to get some real-world feedback on what people are thinking and concerned about.<p>I have some (personal) ideas that might work to address the concerns I'm seeing.<p>{<i>NOTE</i> Some of this is taken from a previous post I wrote (but I kinda missed the thread developing; I was late and I don't think anyone read it). It is useful for this thread, so a modification of that post follows.}<p>First, I think you need a way to define 'ethics' and 'morals', with an 'ontology' and an 'epistemology' to derive a metaphysic for the system (and for my $.02, aesthetics arises from here). Until we have a bit of rigor surrounding this, it's a challenge to discuss ethics in the context of an AI, and AI in the context of the metaphysical stance it takes towards the world.<p>This is vital, as we need to define what 'malicious use' <i>IS</i>. This is still an area (as the thread demonstrates) of serious contention.<p>Take Sesame Credit (a great primer, and even if you know all about it, it is still great to watch: <a href="https://www.youtube.com/watch?v=lHcTKWiZ8sI" rel="nofollow">https://www.youtube.com/watch?v=lHcTKWiZ8sI</a> ). Now here is a question for you:<p>Is it wrong for the government to create a system that uses social pressure, rather than retributive justice or the reactive use of force, to promote social order and a 'better way of life'?<p>Now, I'm not arguing for this (nor against it, for the purposes of this missive), but using it as a way to illustrate that different cultures, governmental systems, and societies may have vastly different perspectives on the idea of a person's relationship vis-a-vis the state when it comes to something like privacy. I would suggest that transparency in these decisions is a good idea. But right now we have no way to do that.<p>I think the current way the industry is working - seemingly hell-bent on developing better, faster, more efficient ways to engineer epistemic engines and ontologic frameworks in isolation - is the root cause of the problem of malicious use.<p>Even the analysis of potential threats (from the referenced article 'The Malicious Use of Artificial Intelligence:
Forecasting, Prevention, and Mitigation' - I just skimmed it so I can keep up with this thread; please enlighten me if I'm missing something important) only pays lip service to this idea. In the Executive Summary, it says:<p>'Promoting a culture of responsibility. AI researchers and the organisations that employ them are in a unique position to shape the security landscape of the AI-enabled world. We highlight the importance of education, ethical statements and standards, framings, norms, and expectations.'<p>However, in the 'Areas for Further Research' section, I would point out that the questions are at a higher level of abstraction than the other areas, or discuss the narrative and not the problem. This might be due to the authors not having exposure to this area of research and development (such as the IEEE's work) - but I will concede that the note about the narrative shows that very few are aware of the work we are doing...<p>This isn't pie-in-the-sky stuff; it has real-world use in areas other than life-or-death scenarios. To quickly illustrate, let's take race or gender bias (for example, the Google '3 white kids' vs. '3 black kids' issue a while back in 2016). I think this is a metaphysical problem (application of Wisdom to determine correct action) that we mistake for an epistemic issue (it came from 'bad' encoding). This is another spin on kypro's concern about the consequences of AI deployment to enable the construction of a panopticon. This is about WISDOM - making wise choices - not about coding a faster epistemic engine or ontologic classifier.<p>Next, after we get some rigor surrounding the ethical stances you consider 'good' vs. 'bad' (a vital piece that just isn't being discussed or defined) in the context of a metaphysic, you have to consider 'who' is using the system unethically. If it is the AI itself, then we have a different, but important, issue - I'm going with 'you can use the AI to do wrong' as opposed to 'the AI is doing wrong' (for whatever reason - its own motivations, or perhaps it agrees with the evil or immoral user's goals and acts in concert).<p>Unless you have clarity here, it becomes extremely easy to befuddle, confuse, or mislead (innocently or not) questions regarding 'who':<p>- Who can answer for the 'Scope' or strategic context (CEO, Board of Directors, General Staff, Politburo, etc.)<p>- Who in 'Organizational Concepts' or 'Planning' (Division Management, Program Management, Field Commanders, etc.)<p>- Who in 'Architecture' or 'Schematic' (Project Management, Solution Architecture, Company Commanders, etc.)<p>- Who in 'Engineering' or 'Blueprints' (Team Leaders, Chief Engineers, NCOs, etc.)<p>- Who in 'Tools' or 'Config' (Individual Contributors, Programmers, Soldiers, etc.)<p>that constructed the AI.<p>Then you need to ask: which person, group, or combination (none dare call it conspiracy!) of these actors used the system in an unethical manner?
Might 'enabled for use' be culpable as well - and is that a programmer, or an executive, or both?<p>What I'm getting at here is that there is both a lack of rigor in such questions (in general in this entire area), a challenge in defining ethical stances in context (which I argue requires a metaphysic), and a lack of clarity in understanding how such systems come into creation ('who' is only one interrogative that needs to be answered, after all).<p>I would say that until and unless we have some sort of structure and standard to answer these questions, it might be beside the point to even ask...<p>And not being able to ask leads us to some uncomfortable near-term consequences. If someone does use such a system unethically - can our system of retributive justice determine the particulars of:
- where the crimes were committed (jurisdiction)
- what went wrong
- who to hold accountable
- how it was accomplished (in a manner hopefully understandable by lawmakers, government/corporate/organizational leadership, other implementers, and one would think - the victims)
- why it could be used this way
- when could it happen again<p>just for starters.<p>The sum total of ignorance surrounding such a question points to a serious problem in how society overall - and then down to the individuals creating and using such tech - is dealing (or rather, not dealing) with this vital issue.<p>We need to start talking along these lines in order to stake out the playing field for everyone <i>NOW</i>, so we might actually have time to address these things, before the alien pops right up and runs across the kitchen table. |
Ask HN: How do you teach your kids about computers and coding? | I think it depends on your objectives. What abstractions are you trying to have your kids develop?<p>* Are you trying to teach your kids simple blinky-light programming?
* Maybe you want them to develop into C++ coders?
* Do you want your kids to understand underlying computer architecture constructs (bits, bytes, cores, caches, etc.)?<p>I learned LOGO as a 7 year old & Lisp shortly thereafter. Later we got a TRS-80 w/ a built-in BASIC interpreter and, out of a mix of necessity and curiosity, I taught myself Z-80 Assembly and eventually started playing around with ALUs & 7400 series logic on breadboards.<p>My offspring isn't interested in the low-level details, but was VERY interested in making art on the computer. We started with MS-PAINT-a-likes, moved on to Gimp & eventually to Blender, using Python to automate some processes.<p>So... my advice is to think about what you're trying to accomplish. If your kid is interested in low-level stuff, maybe start with a soldering iron. If they like art, think about the tools they'll use to model the abstractions they want to work with (Gimp, Blender, Maya, Adobe CS, etc.) and then look at the languages that are easy to integrate w/ those tools.<p>Also, remember that modern spreadsheets are a powerful data-flow modeling system.<p>Modern educators tend to follow the Constructivist model; a simplification might be: "Give your kids some tools and see what they do with them, possibly nudging them towards techniques or solutions when they identify a specific intent or goal." Constructivist educators would probably tell you the interaction between the learner, the educator and the tool is probably more important than the specific tool.<p>Constructionism takes constructivism a bit further and seeks to support the creation of mental models and abstractions by the learner. LOGO, eToys & Scratch were built by teams that were largely influenced by Seymour Papert & his constructionist ideas. In education circles, there's a little bit of debate over constructionism, but it's definitely produced some great tools. And heck, I've had a pretty decent career as a software engineer, so learning LOGO at MIT as a 7 year old in the early 70s seems to have worked out well for me.<p>I'm personally less of a fan of Scratch than I am of TI LOGO running on an old TI-99/4, but that's probably because I find Scratch MUCH too cluttered. But YMMV. Scratch is definitely worth checking out.<p>So to sum up my advice:<p>* try a bunch of things to figure out what your kids' interests are.
* understand the tools you present your kid(s) with so you can help solve problems when they're blocked.
* show your kids a few simple things and let them discover the rest (and then unblock them if they get in a jam)
* if your kids are mentally checking out, it's time to try a different approach.
* don't be afraid to let your kids "waste time" on things they enjoy, even if it doesn't immediately seem like they're learning something (my offspring knows Boolean logic ONLY because we found some YouTube videos about redstone circuits in Minecraft.)<p>Also, you can buy a copy of Papert's "Mindstorms" book online (or heck, it might be available as a free download by now somewhere.) It's a tiny bit controversial in education circles, but it's relatively short and it's a decent intro to Constructivist ideas.<p>So... all that being said... I was pretty impressed with how cheap & fast it was to set up Scratch on a RasPi w/ Raspbian. Under $200 even with a new mouse, keyboard & video screen.<p>The LEGO Mindstorms products all seemed to be a little overpriced to me, but if you don't want to spend the afternoon cobbling together a cheaper educational robot system, they're probably not that bad. Seymour Papert was really big on getting kids to interact with physical objects, so if you buy into constructionism, this is definitely the tool for you.<p>I'm a fan of the LittleBits electronic toys. My offspring ignored them after five minutes, but I've seen other kids spend hours putting blinky-light creations together w/ them.<p>If you're handy with a soldering iron, it's a snap to put one of these pencil circuit tools together. I personally love it. My offspring liked it for a while before disappearing into the purple cloud of sullen adolescence. It was also a bit of a hit w/ my niece.<p><a href="https://makezine.com/projects/build-electronic-audio-game-pencil-paper-conductive-ink/" rel="nofollow">https://makezine.com/projects/build-electronic-audio-game-pe...</a><p>I tried to make sense of the Dash robot, but honestly, I couldn't figure out how it was supposed to be educational. YMMV.<p>Ditto for Cubelets. At $26 per cube, they were a little pricey for a system that seemed to come with waaay too few instructions.<p>I really want to try the makeblock Airblock drone system, but I'm probably projecting my own desires on this one. Still, I used an early version of the makeblock Neuron system and it was at least as good as the LittleBits (though LittleBits' in-box documentation was better.) |
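For the LOGO-style approach mentioned above, Python's built-in turtle module is a reasonable modern stand-in if you don't have a TI-99/4 handy. A minimal sketch (the square is just the classic first exercise, not something prescribed by any of the kits above):

    import turtle  # LOGO-style turtle graphics, included with Python

    t = turtle.Turtle()

    # The classic first exercise: draw a square.
    for _ in range(4):
        t.forward(100)  # move 100 pixels
        t.left(90)      # turn 90 degrees counter-clockwise

    turtle.done()  # keep the window open until it's closed

Kids can then change the numbers and see what happens, which is very much in the constructionist spirit.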
Ask HN: I'm writing a book about white-collar drug use, including tech sector | I'm a male in my mid-40s. I own a web development shop, code daily and employ about a half dozen people. Relatively healthy, although I should work out more than I do. Long history with stimulants and work.<p>These days my main stimulant is coffee, but it makes me very jittery. I go through phases, maybe weeks to months at a time, where I ditch coffee and take small doses (< 5 mg) of Adderall. I think I'm starting one of those phases now.<p>This morning I was stressed about a new client kickoff, so I took about 5mg of Adderall IR, along with the rest of the dozen or so supplements that I normally take. But I didn't drink my normal morning coffee and now I have a wicked headache. I hate overdoing it on the Adderall, because it changes my personality too much--makes me cold and distant from my friends and family.<p>My last startup, with big-time VC funding--the whole startup team, myself included, were heavy daily users of Adderall. It was an open secret. The CEO was the worst, with a 40-50mg/day habit. The COO was over-the-top neurotic, I'm sure made worse by her Ritalin habit. Biggest victim in that situation? Empathy and true friendship.<p>My last startup before that one was a YC-backed Bay Area company with a CEO also on mega doses of daily Adderall. This guy was the worst person I've ever worked with, and that is quite an accomplishment. He didn't sleep and insulted everyone on a regular basis. Would criticize everyone behind their backs about not working enough.<p>My own stimulant use and abuse dates back to my very first startup when I was just a programmer, during the first dotcom boom. My partying friends introduced me to meth, and I rationalized taking it based on the fact that it is available in prescription form, so "how bad can it be?", and on the idea that I was clearly suffering from extreme ADD.<p>I was a daily user for over a year and a half. Would crush up the crystals and snort them. Was terrified of smoking or injecting the stuff and never tried it that way. Totally rationalized my use as necessary for the insane hours my team was putting in at work. 60-70 hours per week were normal. On some weekends I'd go out and party all night and not get any sleep. Eventually, as the needed dose went up, I started losing a lot of weight; I was down 100 pounds from my peak at one point. People were definitely noticing I was strung out, although nobody ever really said anything at work. My close friends were aware and supportive. Some of them would do it sometimes too. I quit when it seemed my girlfriend was going to start using too--she wanted to be skinny and beautiful and popular, in her words. My conscience hit me too hard when I realized what was about to happen with that particular situation. Also, I didn't really recognize my face in the mirror anymore. I flushed about 500 dollars worth down the toilet and quit cold turkey, knowing that I wouldn't be able to afford to buy more even if I wanted to.<p>My boss fired me about 2 weeks later. Said something had changed, that I had lost my spark. I'll never forgive that asshole for that. He knew what was going on and didn't care. It was only fine because my next job had me flying all over the place and making new friends, which made quitting a lot easier than it would have been otherwise.<p>I still miss meth sometimes. It gave me invincibility. I would code for what felt like days at a time, and all the practice really let me make strides in my skills very quickly.
For a brief period of time I was superhuman, but there was almost always a crash. During those days I was using a lot of other drugs too and going out partying a lot; it all kind of blurred together. I barely remember it now, more than 15 years later.<p>Quitting meth left me paranoid about getting hooked again for quite a while afterwards, but after maybe 2-3 years I realized that the ADD is real and that I really did need some meds to cope. I went to the shrink and easily got a script for Adderall. The first time around, I remember that I was on it daily for almost a year before realizing that I didn't like who I had become. It made me cold and unloving and I hated it. I quit again. Ever since, I've only taken it sporadically, when I really really have to focus on something that I really really don't want to focus on.<p>Don't really have any regrets. Drugs should be legal--all of them. People should be able to decide how to use them and live with the consequences. Sometimes they're not that bad. Just think of how many people do coke on a regular basis and don't really have any ill effects. It just comes with the lifestyle of going out and staying out til dawn. And then eventually you grow up and don't want to do that anymore. At least that's what I've seen amongst my friends. |
Ask HN: I'm writing a book about white-collar drug use, including tech sector | One thing I think is important is that people have such a wide array of backgrounds, I think it's a mistake to simply categorize them as white-collar without examining how they got to where they are.<p>I've been in IT for years, but before that I was in the military, saw combat, and developed PTSD. IT is just one of the things I'm good at that pays the bills, but that alone wouldn't tell you my story of drugs.<p>I got out and immediately used my GI Bill to go to uni. I was drinking <i>a lot</i> at the time though. I got up to about a fifth every two days. This was for a lot of reasons. The transition from infantry to civilian life so quickly was rough. I was the oldest person in most of my classes of smartass 18-year-old rich kids whose mommy and daddy paid their way to college and bought them vehicles etc. I became bitter about the war and started researching it further and fell further into disgust because I realized how much of it was a lie. I made some ROTC cadets cry because I snapped when I saw them doing a "patrol" on campus with poor dispersion and yelled at them. I failed out of a math class... things were generally not headed in a good direction.<p>Then the tables turned, and it was all thanks to cannabis. See, I was always brought up anti-drug, so I was one of the few outliers who literally never did drugs of any sort in high school other than alcohol and caffeine. As part of my military goals of getting and maintaining a clearance, I was also very strict with myself about drugs, always feeling contempt and disgust for those who participated in them (not realizing the hypocrisy and double standard when it came to tobacco and alcohol).<p>So it wasn't until after two semesters of this downward spiral that a friend pulled me into his room and said he wanted to talk to me about cannabis. He said he knew my position, but that, being scientifically minded, I should read up on the scientific facts known about it and then read some first-hand experience articles from a place called Erowid. After a few weeks of reading every scientific paper I could get my hands on, and reading a few Erowid posts from other PTSD military types, I finally decided I was a civilian now; I could give it a shot.<p>Cannabis literally saved me from becoming (or staying) an alcoholic. It was an almost instant turnaround. I found being drunk disgusting. I stopped smoking tobacco completely. (Smoked a pack of cigs a day in-country, quit cold turkey when I left Iraq for the last time, but continued to smoke cigars/pipe tobacco every once in a while.) I found myself able to process things better. I was getting sleep that I'd previously had to be drunk to get (hypervigilance is a bitch). I was being nicer to other students and not as standoffish, and started making friends.<p>I eventually quit college to do a startup that I left a few years later but that is still going strong. I stayed in the industry and have been in IT since. This sounds like a success story, right? That's not all.<p>What I didn't tell you was I lived in the Bible Belt, in a state where it was/is illegal. I started having issues with the demographic... that is to say, the people. I started having issues despite fairly regularly smoking cannabis, and started drinking again at the same time (cross-fading). I eventually got so fed up with the area I was in that I jumped ship and moved to the east coast, where I had a support structure, to try to finally get help from the VA.<p>The VA was horrible.
They put me on shit that made me feel like a zombie (trazodone in particular, but all of it did it). Finally I realized the VA was not helping (the most help I got was from learning how CBT and CBET worked from an independent org), stopped all their bullshit drugs, which I think are a <i>major</i> health problem, stopped drinking, and came back to my state. I didn't mean to stay. I just wanted to catch up with family and friends, and was intending to go to Seattle because cannabis had just been legalized, and then I met the woman I fell in love with.<p>I ended up staying in my state, but my SO had a druggie brother and an alcoholic dad, and many concerns about the illegality of it, so my usage of cannabis, no matter when or in what amounts, became a fighting issue. So I tried to quit (because love is more important than anything, right?). What ended up happening, though, is that I fell into this vicious cycle where I would have a particularly rough time for a while and would give in, toke up furiously for a week or two, cause a fight, and then quit, only to find myself drinking instead (I even once started dipping for a few months as my <i>vice</i> to replace the others, and let me tell you, dipping is nasty as hell). Then I'd be "good" for a while until I had another hard time (like a week of super bad hypervigilance at night causing very little sleep), and the cycle would repeat. Obviously this eventually ended the relationship.<p>I tried very hard to keep this all in my personal life, though, and protect my professional life from those troubles. This whole time I was performing fairly well in my job(s). I could have been doing so much better, but I was surviving given the double life I was leading. Eventually the stress leaked through, and I ended up quitting my job with very little notice because of an unrelated unhappiness there (being underpaid).<p>So now I am using the rest of my GI Bill to go to an online school and finish my degree while I job search in legalized states, so I don't have to deal with the paranoia surrounding its legality. I determined it was my medicine and that I deserve to have my medicine without feeling guilty or ashamed or being scared of being arrested.<p>That said, I wanted to test myself to see how bad things are as a baseline, so I am now two months drug-free (minus caffeine), and despite the ol' PTSD rearing its head a bit, it's better than it ever has been, and I intend to stay drug-free until I make it to a legal state.<p>Right now my main internal discussion is whether I am willing to be on a list of patients in a medical-only state or whether I find anonymity important enough to only go to a recreationally legal state.<p>That's my story. I hope it's useful to you in some way or interesting to others. |
Ask HN: I'm writing a book about white-collar drug use, including tech sector | I am a software developer and someone running their own project. I am a recovering stimulant addict, and a recovered cannabis addict.<p>And by stimulants, I mean caffeine in its various forms. In the beginning, when I was under 20, it started with coding through the weekends on the power of Pepsi. I used to vomit this black goo out of my body after quickly guzzling down a 1.5l bottle, and I thought it was just OK; the sugar and caffeine rush was worth it so I could code through the night. When I started coding professionally later in my life, at 28-29, I went back to this connection, always drinking strong black tea while coding. Did this for many years and thought nothing wrong of it. At some point I started to gravitate towards green tea and mate. Yerba mate became my favorite stimulant for a long time.<p>I thought the caffeine gave me a way to get into the zone, to go deep in the code and continue that for hours and hours. What I didn't see until later was that in the long run this habit was eating away at my core power, slowly and bit by bit, and giving me trouble sleeping -- something that, in combination with deep depression, led me to discover cannabis as a cure for both. A couple of years of daily cannabis use by vaporizing, and I thought it was the best thing ever. Now I could be more creative, focus better, and even get sleep. At this point, when I was bored with my work, I would sometimes vape in the morning and go to work slightly stoned, thinking I was acing it. I didn't do this very often, but definitely during a period when I was thinking of quitting my job I was many times stoned at work, just trying to cope with it.<p>But slowly this habit started to take its toll. I was becoming weak, addicted; I couldn't function without cannabis. I thought it would be easy to quit, but man, the habit and addiction is strong. It took me over 7 years of periodic on-and-off usage to really decide, and to see how continuous use of cannabis made me make bad decisions -- made me think I was creative and able, while I was making silly creative decisions and letting the drug affect my work.<p>Many times while high I would think my work and designs were really beautiful, but when looking at them sober I would see they were really not something I would have made sober. The drug had its effect on my work, and the work was not coming purely out of me, but through the filter of that drug. Cannabis also made me really sensitive to things, not able to stand by my own opinion; I really didn't care in that mood, I just wanted to escape to that bubble and be there.<p>I managed to get out of that habit and thought pattern with serious inner work. Meditation was the strongest thing that stayed with me during those years. It helped me during all the ups and lows, and I never needed anything external to do it; I could rely on it always being with me. I wish somebody had taught me this skill, or told me about it with more persuasion. I always thought I was ADHD and that I couldn't focus on things long enough. Then I learned how to quiet my mind, with meditation. It calmed my mind down. It clears the plate, so to speak, when done habitually. And it's always with me. It's the best thing that has happened to me. Seriously.<p>Anyway, I'm still recovering from the stimulant use.
While starting to work on my own project and startup thing, I turned to coffee because I could go to a nice coffee place and work there without having a workplace, drinking one cup of coffee. Later I would combine this with L-theanine to get the perfect combination of stimulation and calming down. Little did I know how addictive coffee was; man, that is one strong drug right there. After one year I started getting panic attacks, something completely new to me, just from drinking one cup of strong coffee.<p>Anyway, long story short, caffeine started to push me towards the edge and in the long run just drained my energy; I had to quit. Now, finally, I have also gotten out of the psychological pattern of combining coding with caffeine. It took serious work, but now I see that by being sober I have more focus and sustainable work time, if I am well rested and well nourished. If the rest, nutrition or exercise is out of place, I have to fix that first, not try to push through it with stimulants.<p>I've also worked with many entheogens and psychedelics over the years, with psilocybin mushrooms, LSD and ayahuasca being the most beneficial for me. I can't stress enough how much especially mushrooms and ayahuasca have helped me to heal myself from chronic depression. But those are again only tools; the inner strength is the one that carries through all of these external tools. In the software development field, I see many people taking LSD and other psychedelics; it's really commonplace, to get visions, to see how to build things, to heal oneself to be able to contribute to making this world a better place. Heavy stimulants like amphetamines are not really talked about, but I feel many are doing that also, many with a prescription drug for their ADHD/ADD perhaps. |
Western science catching up to Indigenous Traditional Knowledge | The main advantage of Tradition is time. I think the article is most compelling when it focuses on drawing knowledge from traditional practices that have been carried out for a long period of time in a stable society (the bit about "clam gardens" in particular).<p>A traditional practice is a sequence of small, imprecise experiments extended throughout a large period of time. It's risk-averse and only tweaks things slowly, but it has the benefit of probably not breaking everything when you do it, because if it was going to have catastrophic consequences it probably wouldn't have persisted for this long without anyone noticing. A 'traditional' system of medicine probably doesn't have the underlying principles exactly right, and it gets stuck in local optima, but it usually falls into the 'ineffective' category when it goes wrong rather than 'insane side effects.' Admittedly, tradition has issues when the underlying system changes rapidly and practices that made sense in the past no longer make sense to do, and there's a catch-up time that has to happen.<p>Science has the advantage of being able to more rigorously and skeptically re-evaluate assumptions and to tease out underlying causes and principles. But at the same time it's also prone to human frailties in how it's conducted -- see the replication crisis. Taleb might call it a lack of 'skin in the game,' where researchers are institutionally motivated to publish whatever they can that gets them a p-value below 0.05 and at the end of the day they're probably not changing their own personal habits or practices based on what their research says (because the strength and weakness of science is that it finds ways to detach the researcher from the research). The 'danger of a single study' comes when initial findings become loudly reported and the general population (or rather institutional powers) who want to be Modern and Cutting-Edge and moving toward the Future and Progress will take the initial findings as a stamp of approval.<p>The main issue is when we use science not just for 'what are the facts?' but 'how should we live?' and apply our initial findings universally. Because society doesn't want to wait to make a change, and our scientific processes usually do not have the advantage of time that tradition does, we start to embody the long-term experiment into the culture. And what's worse is that we get pressure to adopt it more widely than may be prudent. Why would a government official only promote a new idea in a single isolated population when they can reap the benefits of the new science by pushing the idea onto the whole country? When it's right, it has great reward, but when it's wrong the costs are great. And so we got the 'low-fat' craze that has led to great costs and suffering, forcing these generational oscillations to try to get people back on track to something with more scientific support.<p>Scott Alexander's book review of "Seeing Like a State" [0] points to this same attitude of hubris when it came to the 'modern rational scientific' thinking of the High Modernists in architecture:<p>>First, there can be no compromise with the existing infrastructure. It was designed by superstitious people who didn’t have architecture degrees, or at the very least got their architecture degrees in the past and so were insufficiently Modern.
The more completely it is bulldozed to make way for the Glorious Future, the better.<p>>Second, human needs can be abstracted and calculated. A human needs X amount of food. A human needs X amount of water. A human needs X amount of light, and prefers to travel at X speed, and wants to live within X miles of the workplace. These needs are easily calculable by experiment, and a good city is the one built to satisfy these needs and ignore any competing frivolities.<p>>Third, the solution is the solution. It is universal. The rational design for Moscow is the same as the rational design for Paris is the same as the rational design for Chandigarh, India. As a corollary, all of these cities ought to look exactly the same. It is maybe permissible to adjust for obstacles like mountains or lakes. But only if you are on too short a budget to follow the rationally correct solution of leveling the mountain and draining the lake to make your city truly optimal.<p>>Fourth, all of the relevant rules should be explicitly determined by technocrats, then followed to the letter by their subordinates. Following these rules is better than trying to use your intuition, in the same way that using the laws of physics to calculate the heat from burning something is better than just trying to guess, or following an evidence-based clinical algorithm is better than just prescribing whatever you feel like.<p>>Fifth, there is nothing whatsoever to be gained or learned from the people involved (eg the city’s future citizens). You are a rational modern scientist with an architecture degree who has already calculated out the precise value for all relevant urban parameters. They are yokels who probably cannot even spell the word architecture, let alone usefully contribute to it. They probably make all of their decisions based on superstition or tradition or something, and their input should be ignored For Their Own Good.<p>The result was ugly rectangles that no one wanted to live in, at the cost of destroying sections of cities that had grown organically over time to solve their local particular problems.<p>The experiments of reality have to be conducted no matter what, and sometimes those experiments cannot be sped up. Not everything about social effects can be revealed in a 12-week study, and sometimes 50 years or more are needed to reveal the negative effects of a policy. Are there ways to effectively contain the high-variance experiments of science to small populations while keeping most people on the low-variance experiments of tradition?<p>[0] <a href="http://slatestarcodex.com/2017/03/16/book-review-seeing-like-a-state/" rel="nofollow">http://slatestarcodex.com/2017/03/16/book-review-seeing-like...</a> |
Defending Scientism: Mathematics Is a Part of Science | Ok, there's been some online debate, esp recently on Lobste.rs, about mathematical vs empirical evidence. Some people strongly believe in formal proof, for reasons that probably vary. Some say those methods are nonsense, advocating for a strictly empirical approach of experimental evidence. I'm somewhere in the middle: I want any formal method to have evidence it works in an empirical sense, but I don't see the concepts as that different. I think we believe in specific types of math/logic because they're empirically proven. As I formulated those points & dug up related work, I found this submission, which argued a lot of what I was going to argue plus some other points (esp on intuition) I wasn't thinking about. So, enjoy!<p>If you've read it at this point, I'll weaken that claim to mine: that certain maths/logics are empirically valid by being relied on in hundreds of thousands to billions of uses. If they were wrong, we'd see a massive amount of evidence in the field that the fundamental logical methods didn't work. Matter of fact, if some of these failed, it would be impossible for opponents of formal methods to even write their rebuttals online, given that propositional logic and calculus are how mixed-signal ASICs work. ;)<p>So, I'm going to list some specific types of math with widespread, empirical confirmation as a start. Anyone else feel free to add to the list if it's math/logic used in formal proofs that has a massive level of real-world use with diverse inputs or environmental setups.<p>1. Arithmetic on integers and real numbers. Computers can be boiled down to glorified calculators. Arithmetic can help bootstrap trust in other things, too.<p>2. Algebraic laws. Logics like Maude build on these.<p>3. Calculus, like integration and derivatives. Heavily used in engineering. Also, the building blocks of analog circuits and computers implement these primitives.<p>4. Propositional and Boolean logic. Digital designs are synthesized, verified, and tested in this form. With all the CPUs shipped, it's probably the most well-tested logic after arithmetic and algebra.<p>5. Some subset of first-order logic. There's been a lot of experimental confirmation it works, in the form of Prolog and Datalog programs that do what the logic says they should. I'll note the older Prolog work was a bit weaker since it was associated with lots of zealotry that led to the AI winter. We can focus on working applications, though, of which there are many in commercial and FOSS software. Although it has a niche, Prolog was pushed way out of it to implement a diverse array of programs and environments. The SMT solvers also catch problems they're supposed to on a regular basis. All problems I've seen investigated in these were implementation errors that didn't invalidate the logic itself.<p>6. Set theory. Experiments with the principles seem to always work as long as problems and solutions fit the simple modeling style. It's pretty intuitive for people. It's been used to model many problems, esp theoretical ones, whose applications and rules were reviewed by humans for accuracy. There is some use of it in programming in data structures and composition. Deployments haven't invalidated the basic operators.<p>7. Basic forms of geometry. You can get really far in making stuff happen predictably if it's modeled in or built with simple shapes. See Legos and construction in general.<p>8. Basic probability theory. There's a ton of use of this in empirical research.
They all seem to trust the basic concepts after their observations of the field.<p>One can do all kinds of hardware/software verification using 1-6 alone. I've seen 7 used periodically in applications such as digital synthesis and collision avoidance for guidance systems. I've seen 8 and similar logics used when the problem domain is imprecise. If those logics and their basic rules are empirically proven, then the applications of them have high, empirical confidence if they're using the rules correctly. At that point, most if not all problems would show up in the extra items with less or no empirical verification, such as formal specifications of the problem and implementations/checkers for those logics. Formal methods people always note that things like checkers are in the TCB, keeping them simple and small. Verified checkers also exist, which we can further test if we want. Regardless, looking at these logics and their rules as theories with massive, empirical validation means we'd trust or watch out for the exact things the formal verification papers say we should: everything in the TCB except the logical system and rules, if it uses one of the above.<p>The mathematical concepts themselves are as empirical as anything else. Probably more so, given they've had more validation than techniques many empiricists use in experiments. The statisticians and experimenters arguing among themselves about the validity of certain techniques probably don't have the slightest doubt in basic arithmetic or algebra. It's clear given they rely on math in their experiments to justify their own claims. Gotta wonder why they doubt these maths work only when the inputs are numbers representing programs. |
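To make point 5 concrete, here's a minimal sketch using the Python bindings of the Z3 SMT solver (the z3-solver package); the toy constraint system is my own illustration, not an example from the formal methods work cited above:

    from z3 import Int, Solver, sat

    # Two unknowns constrained by first-order arithmetic facts.
    x, y = Int('x'), Int('y')
    s = Solver()
    s.add(x > 0, y > 0, x + y == 10, x * x + y * y == 58)

    # The solver either produces a model satisfying every constraint
    # or reports that none exists.
    if s.check() == sat:
        print(s.model())  # e.g. [x = 3, y = 7]

Every successful check like this is one more small empirical confirmation that the underlying logic does what it claims.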
Ask HN: If you've used a graph database, would you use it again? | I'm an engineer who did RDBs for a long time. One day a customer of a friend came with an issue that was, in my opinion, impossible to solve with relational DBs: he described data that is in flux all the time, and there was no way we could come up with a schema that would fit his problem for more than one month after we finished it. Then I remembered that another friend once mentioned this graph model called RDF and its query language SPARQL, and I started digging into it. It's all W3C standards, so it's very easy to read into it, and there are competing implementations.<p>It was a wild ride. At the time I started there was little to no tooling, only a few SPARQL implementations, and SPARQL 1.1 was not released yet. It was a PITA to use, but it still stuck with me: I finally had an agile data model that allowed me and our customers to grow with the problem. I was quite sceptical that it would ever scale, but I still didn't stop using it.<p>Initially one can be overwhelmed by RDF: it is a very simple data model, but at the same time it's a technology stack that allows you to do a lot of crazy stuff. You can describe the semantics of the data in vocabularies and ontologies, which you should share and re-use, you can traverse the graph with its query language SPARQL, and you have additional layers like reasoning that can figure out hidden gems in your data and make life easier when you consume or validate it. And most recently people have started integrating machine learning toolkits into the stack so you can directly train models based on your RDF knowledge graph.<p>If you want to solve a small problem, RDF might not be the most logical choice at first. But then you start thinking about it again and you figure out that this is probably not the end of it. Sure, maybe you would be faster by using the latest and greatest key/value DB and hacking some stuff together in fancy web frameworks. But then again there is a fair chance the customer will want you to add stuff in the future, and you are quite certain that at some point it will blow up because the technology cannot handle it anymore.<p>That will not happen with RDF. You will have to invest more time at first, you will talk about things like the semantics of your customer's data, and you will spend quite some time figuring out how to create identifiers (URIs in RDF) that are still valid years from now. You will have a look at existing vocabularies and just refine things that are really necessary for the particular use case. You will think about integrating data from relational systems, Excel files or JSON APIs by mapping them to RDF, which again is all defined in W3C standards. You will mock up some data in a text editor, written in your favourite serialization of RDF. Yes, there are many serializations available, and you should most definitely throw away any book/text that starts with RDF/XML; use Turtle or JSON-LD instead, whatever fits you best.<p>After that you start automating everything, you write some glue code that interprets the DSL you just built on top of RDF and appropriate vocabularies, and you start to adjust everything to your customer's needs. Once you go live it will look and feel like any other solution you built before, but unlike those, you can extend it easily and increase its complexity once you need it.<p>And at that point you realize that this is all worth it and you will most likely not touch any other technology stack anymore.
At least that's what I did.<p>I could go on for a long time; in fact, I teach this stack in companies and gov organizations over several days, and I can only scratch the surface of what you can do with it. It does scale, I'm convinced of that by now, and the tooling is getting better and better.<p>If you are interested, start by having a look at the Creative Commons course/slides we started building. There is still lots of content that should be added, but I had to start somewhere: <a href="http://linked-data-training.zazuko.com/" rel="nofollow">http://linked-data-training.zazuko.com/</a><p>Also have a look at Wikipedia for a list of SPARQL implementations: <a href="https://en.wikipedia.org/wiki/Comparison_of_triplestores" rel="nofollow">https://en.wikipedia.org/wiki/Comparison_of_triplestores</a><p>Would I use other graph databases? Definitely not. The great thing about RDF is that it's open; you can cross-reference data across silos/domains and profit from work others did. If I create another silo in a proprietary graph model, why would I bother?<p>Let me finish with a quote from Dan Brickley (Google's schema.org) and Libby Miller (BBC) in a recent book about RDF validation:<p>> People think RDF is a pain because it is complicated. The truth is even worse. RDF is painfully simplistic, but it allows you to work with real-world data and problems that are horribly complicated. While you can avoid RDF, it is harder to avoid complicated data and complicated computer problems.<p>Source: <a href="http://book.validatingrdf.com/bookHtml005.html" rel="nofollow">http://book.validatingrdf.com/bookHtml005.html</a><p>I could not have come up with a better conclusion. |
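To give a taste of how lightweight the entry point is, here's a minimal sketch using Python's rdflib (my choice for illustration; the commenter doesn't name a specific toolkit, and the example.org data is made up) that parses a few Turtle triples and runs a SPARQL query over them:

    from rdflib import Graph

    # A few triples in Turtle, re-using the schema.org vocabulary.
    ttl = """
    @prefix schema: <http://schema.org/> .
    @prefix ex: <http://example.org/> .

    ex:alice a schema:Person ;
        schema:name "Alice" ;
        schema:knows ex:bob .

    ex:bob a schema:Person ;
        schema:name "Bob" .
    """

    g = Graph()
    g.parse(data=ttl, format="turtle")

    # SPARQL: whom does Alice know?
    q = """
    PREFIX schema: <http://schema.org/>
    PREFIX ex: <http://example.org/>
    SELECT ?name WHERE {
        ex:alice schema:knows ?person .
        ?person schema:name ?name .
    }
    """
    for row in g.query(q):
        print(row.name)  # -> Bob

Note how the vocabulary (schema.org) is shared and re-used rather than invented per project, which is exactly the cross-silo openness argued for above.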
Ask HN: How to self-learn math? | Rather than simply give you a list of resources or textbooks, I’d like to give you a broad “map” of the various domains of mathematics, this way you understand what you’re working towards. I’d also like to recommend how you can maximally optimize your self-study, as someone who mostly self-learned enough mathematics to be active in research. I think this meta-direction is just as important as the resources you choose to learn from.<p>Mathematics, in my opinion, can be divided in two major ways as concerns pedagogy. First, most mathematics is either computation-based or proof-based. Math research in general is about proving things, and most “serious” math books after a certain level are almost exclusively about proving properties instead of calculating results. On the other hand, most applied mathematics is computationally inclined, and uses methods derived from research. Here is a simple example: I can ask you to calculate the square root of 2 or I can ask you to prove that it’s irrational.<p>It’s important for you to know what you want. Do you, for example, want to achieve theoretical mastery of linear algebra that subsumes e.g. solving linear equations, or do you just want to be able to execute the computational methods proficiently? As you get into higher mathematics the line here blurs, but different resources may still emphasize one approach or the other.<p>Now let’s talk about the domains of mathematics. Broadly speaking, we can divide them into algebra and analysis. More accurately, we can divide their methods into algebraic or analytic. Algebra is concerned with mathematical structures and their properties, like fields, groups, rings, vector spaces, etc. Analysis is concerned with functions, surfaces and continuity. I like to say that in algebra, it’s difficult to identify what you’re studying and whether it’s worth studying, but once you do there is a lot of machinery that’s relatively straightforward to prove. On the other hand, in analysis it’s easy to find things worth studying, but difficult to prove interesting things about them. For example, if you can prove that what you’re studying satisfies all the conditions of a <i>field</i>, you immediately can prove many other things about it. On the other hand, the toolbox of analysis is widely applicable to many things, but it often seems like you’re trying a hodgepodge of techniques, and the proofs can look kind of magical at first. For a concrete example, try to prove that 1 + 2 + 3 + ... + <i>n</i> = <i>n</i>(<i>n</i> + 1)/2.<p>Now let’s take a tour of mathematics at the undergraduate level. In theoretical (but not necessarily pedagogical) order we have: set theory, calculus, analysis, topology and probability theory on the analytic side; and set theory, linear algebra and abstract algebra on the algebraic side. Analysis can be further subdivided into real analysis, complex analysis, functional analysis, harmonic analysis, Fourier analysis, as you move from foundational material to specialized material. Similarly, abstract algebra divides into group theory, ring theory, finite fields, Galois theory, etc. Probability breaks down into discrete versus continuous random variables, measure theory, statistics (on the applied side), etc.<p>Here is my concrete advice regarding learning the material. First, internalize the idea that mathematics is “not a spectator sport.” You learn it by doing it, not just by reading it.
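In that spirit, here is the classic pairing argument (often attributed to Gauss) for the sum 1 + 2 + ... + <i>n</i> mentioned above, written in LaTeX; try it yourself before reading. Write the sum twice, once reversed, and add column by column:<p><pre><code> \begin{aligned}
S  &= 1 + 2 + \cdots + (n-1) + n \\
S  &= n + (n-1) + \cdots + 2 + 1 \\
2S &= \underbrace{(n+1) + (n+1) + \cdots + (n+1)}_{n \text{ terms}} = n(n+1)
\end{aligned}
\qquad\text{hence } S = \frac{n(n+1)}{2}.
</code></pre>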
Every time you’re sitting down with a textbook, attempt every exercise in good faith, and take an author’s lack of a proof as an invitation to prove it yourself. The first time you read a chapter, read it briskly, skipping over what you don’t know to get to the end of the chapter. Let that material percolate in your mind a bit, even though you won’t understand much of it. Then read the chapter again, but slowly. Write down every definition, theorem and proof. Try to prove each theorem yourself before reading the author’s proof. For anything unclear, search for different examples of that concept or for different proofs of that theorem. Then attempt at least half of the exercises at the end of the chapter. You will struggle <i>a lot</i>, and you will be demotivated <i>a lot.</i> It will feel frustrating and you will be humbled continually. But I can promise you that if you keep challenging yourself this way you will continue to improve. It’s not enough to find the right textbooks or the right resources; you need to study them the right way - in an active, focused way.<p>That brings me to my second piece of advice. There are many good books and resources for any given topic. Different people respond more favorably to different types of exposition. Sometimes you’ll receive a book suggestion and realize it’s not for you - that’s fine! It might still be a good book. For example, I rather like Rudin’s <i>Principles of Mathematical Analysis</i>, but please don’t try to learn from it without a teacher! For any given topic, find four or five strong suggestions, preferably all at your level of capability at the time. Then read the preface and the first 10 pages of the first chapter in each book. Look at the table of contents to understand not only the coverage of topics, but the pedagogical <i>arrangement</i> of topics. Proceed with the book you have the strongest affinity for, and use other books when the author is unclear.<p>Finally, <i>now</i> I will give you textbook suggestions:<p>1. Set Theory: <i>Naive Set Theory</i>, Halmos.<p>2. Calculus: <i>Single Variable Calculus</i>, Stewart; <i>Multivariable Calculus</i>, Stewart; <i>Calculus</i>, Spivak.<p>3. Linear Algebra: <i>Linear Algebra and Its Applications</i>, Strang; <i>Linear Algebra Done Right</i>, Axler; <i>Linear Algebra</i>, Hoffman & Kunze; <i>Finite Dimensional Vector Spaces</i>, Halmos.<p>4. Analysis: <i>Analysis I</i>, Tao; <i>Analysis II</i>, Tao; <i>Understanding Analysis</i>, Abbott; <i>Principles of Mathematical Analysis</i>, Rudin.<p>5. Abstract Algebra: <i>A Book of Abstract Algebra</i>, Pinter; <i>Abstract Algebra</i>, Dummit & Foote; <i>Algebra</i>, Artin; <i>Algebra</i>, Hungerford; <i>Algebra</i>, MacLane & Birkhoff; <i>Algebra: Chapter 0</i>, Aluffi.<p>Start with that, and once you've gained sufficient mathematical maturity look for more targeted and specialized resources. I also recommend that you read <i>Concrete Mathematics</i> by Graham, Knuth, Patashnik; and <i>Mathematics: Its Content, Methods and Meaning</i> by Kolmogorov, Aleksandrov and Lavrent'ev. These two, especially the latter, are good for covering a variety of mathematics at once. They are good for both learning and mathematical "culture."<p>I can't stress this enough: it's important that you really optimize the way you're studying and what your goals are, instead of trying to collect as many book recommendations as possible. |
How Your Returns Are Used Against You at Best Buy, Other Retailers | Best Buy, other chains pay to track customers’ shopping behavior and limit items they can bring back<p>At Best Buy, returning too many items within a short time can hurt a person’s score, as can returning high-theft items such as digital cameras. Photo: Craig Matthews/The Press of Atlantic City/Associated Press<p>By Khadeeja Safdar
March 13, 2018 5:30 a.m. ET<p>Every time shoppers return purchases to Best Buy Co., they are tracked by a company that has the power to override the store’s touted policy and refuse to refund their money.<p>That is because the electronics giant is one of several chains that have hired a service called Retail Equation to score customers’ shopping behavior and impose limits on the amount of merchandise they can return.<p>Jake Zakhar recently returned three cellphone cases at a Best Buy store in Mission Viejo, Calif., and a salesperson told him he would be banned from making returns and exchanges for a year. The 41-year-old real-estate agent had bought cases in extra colors as gifts for his sons and assumed he could bring back the unused ones within the 15 days stated in the return policy as long as he had a receipt.<p>The salesperson told him to contact Retail Equation, based in Irvine, Calif., to request his “return activity report,” a history of his return transactions. The report showed only three items—the cellphone cases—totaling $87.43. He asked the firm to lift the ban, but it declined. When he appealed to Best Buy and tweeted his report, the company referred him back to Retail Equation.<p>“I’m being made to feel like I committed a crime,” said Mr. Zakhar. “When you say habitual returner, I’m thinking 27 videogames and 14 TVs.”<p>Stores have long used generous return guidelines to lure more customers, but such policies also invite abuse. Retailers estimate 11% of their sales are returned, and of those, 11% are likely fraudulent returns, according to a 2017 survey of 63 retailers by the National Retail Federation. Return fraud or abuse occurs when customers exploit the return process, such as requesting a refund for items they have used, stolen or bought somewhere else.<p>Amazon.com Inc. and other online players that have made it easy to return items have changed consumer expectations, adding pressure on brick-and-mortar chains. L.L. Bean Inc., which once allowed customers to make returns even years after they purchased items, recently clamped down, citing abuse.<p>Some retailers monitor return fraud in-house, but Best Buy and others pay Retail Equation to track and score each customer’s return behavior for both in-store and online purchases. The service also works with Home Depot Inc., J.C. Penney Co. , Sephora and Victoria’s Secret. Some retailers use the system only to assess returns made without a receipt.<p>Best Buy uses Retail Equation to assess all returns, even those made with a receipt. Dozens of shoppers have complained on Twitter, Facebook, Yelp and other online forums that they were prevented from making returns despite following the store’s policy.<p>Retail Equation said its services are used in 34,000 stores, but declined to provide a full list of its clients. The Wall Street Journal learned of the relationship between some retailers and the firm by reviewing return activity reports from customers.<p>“We are hired by the retailers to review the returns, look for suspicious situations and issue approvals, warnings or denials,” said Tom Rittman, a marketing vice president at Appriss Inc., a Louisville, Ky., data analytics firm that acquired Retail Equation in 2015.<p>The company said its system is designed to identify 1% of shoppers whose behaviors mimic return fraud or abuse. Its statisticians and programmers have built a customized algorithm for each retailer that scores customers based on their shopping behavior and then flags people who exceed a certain score. 
The company said it doesn’t share a person’s data from one retailer with another.<p>“You could do things that are inside the posted rules, but if you are violating the intent of the rules, like every item you’re purchasing you’re using and then returning, then at a certain point in time you become not a profitable customer for that retailer,” said Mr. Rittman.<p>At Best Buy, returning too many items within a short time can hurt a person’s score, as can returning high-theft items such as digital cameras. After the Journal contacted Best Buy, the company said it created a dedicated hotline (1-866-764-6979) to help customers who think they were wrongfully banned from making returns.<p>“On very rare occasions—less than one tenth of one percent of returns—we stop what we believe is a fraudulent return,” said Jeff Haydock, a spokesman for Best Buy. “Fraud is a real problem in retail, but if our systems aren’t as good as they can be, we apologize to anyone inappropriately affected.”<p>Best Buy CEO Hubert Joly said the company is “looking very seriously at the process and partner around this.”<p>When a consumer makes a return, details about his or her identity and shopping visit are transmitted to Retail Equation, which then generates a “risk score.” If the score exceeds the threshold specific to the retailer, a salesperson informs the consumer that future returns will be denied and then directs them to Retail Equation to request a return activity report or file a dispute.<p>It isn’t easy for shoppers to learn their standing before receiving a warning. Retailers typically don’t publicize their relationship with Retail Equation. And even if a customer tracks down his or her return report, it doesn’t include purchase history or other information used to generate a score. The report also doesn’t disclose the actual score or the thresholds for getting barred.<p>Dave Payne, a 38-year-old public relations professional, said he learned of the system for the first time when he received a warning at a Best Buy in Orlando, Fla. He was returning a digital scale and a router extender, with a receipt for both items.<p>He said neither Best Buy nor Retail Equation provided a clear explanation for what he did wrong: “Best Buy advertises a 15-day return policy, but they are not advertising that at some point when you’ve crossed an arbitrary line, that policy no longer applies.”<p>The ban on his account was lifted after he complained to the company’s public-relations department, but he remains upset that his information is being shared with a third party. “It creeps me out.”<p>Write to Khadeeja Safdar at [email protected] |
Inheritance Often Doesn't Make Sense | People in this thread are trying to separate the concept of mutation from the concept of inheritance, but the problem with this is that you can't separate the two. Consider the following pseudocode:<p><pre><code> Rectangle r = new Rectangle(height = 3, width = 5);
Square s_as_s = new Square(side = 4);
Rectangle s_as_r = s_as_s;
print(r.height); // prints 3
print(r.width); // prints 5
print(s_as_s.height); // prints 4
print(s_as_s.width); // prints 4
print(s_as_r.height); // prints 4
print(s_as_r.width); // prints 4
print(r is_a? Square); // prints false
print(s_as_s is_a? Square); // prints true
print(s_as_r is_a? Square); // prints true
</code></pre>
Okay, so the question comes up when you mutate these results:<p><pre><code> r.height = 2;
s_as_r.height = 2;
</code></pre>
Keep in mind, this is a perfectly reasonable thing to do in both cases: you're just making two rectangles a little shorter. But no matter how you handle this situation, the results are surprising:<p>One way:<p><pre><code> print(r.height); // prints 2
print(r.width); // prints 5
print(s_as_s.height); // prints 2
print(s_as_s.width); // prints 2
print(s_as_r.height); // prints 2
print(s_as_r.width); // prints 2
print(r is_a? Square); // prints false
print(s_as_s is_a? Square); // prints true
print(s_as_r is_a? Square); // prints true
</code></pre>
This is the simplest to implement, but only because there's a part of the contract of Rectangle which is <i>implied</i> and not enforced by the compiler. When we change the height of a rectangle, we don't expect the width to change. This is the sort of gotcha that needs to be put in the documentation in big red letters: "WARNING: CHANGING THE HEIGHT MAY CHANGE THE WIDTH IN SOME SITUATIONS."<p>Another way:<p><pre><code> print(r.height); // prints 2
print(r.width); // prints 5
print(s_as_s.height); // prints 2
print(s_as_s.width); // prints 4
print(s_as_r.height); // prints 2
print(s_as_r.width); // prints 4
print(r is_a? Square); // prints false
print(s_as_s is_a? Square); // prints true
print(s_as_r is_a? Square); // prints true
</code></pre>
But now you've broken the contract of Square: the user is going to be very surprised when changing the length of one side of a Square means that the Square instance no longer represents a square.<p>Okay, what about this:<p><pre><code> print(r.height); // prints 2
print(r.width); // prints 5
print(s_as_s.height); // prints 2
print(s_as_s.width); // prints 4
print(s_as_r.height); // prints 2
print(s_as_r.width); // prints 4
print(r is_a? Square); // prints false
print(s_as_s is_a? Square); // prints false
print(s_as_r is_a? Square); // prints false
</code></pre>
This might be possible with some horrible hack in a language that does dynamic typing. This maintains the contracts of all the types, but I'd argue that it breaks the contract of the language itself: it's deeply confusing to have the type of the s_as_s variable change out from under you.<p>Perhaps you could do this:<p><pre><code> print(r.height); // prints 2
print(r.width); // prints 5
print(s_as_s.height); // prints 2
print(s_as_s.width); // prints 2
print(s_as_r.height); // prints 2
print(s_as_r.width); // prints 4
print(r is_a? Square); // prints false
print(s_as_s is_a? Square); // prints true
print(s_as_r is_a? Square); // prints false
</code></pre>
Setting aside how one might even implement this, we've now got the surprising result that s_as_s and s_as_r seem to be different instances.<p>Maybe we should have prevented this in the first place:<p><pre><code> r.height = 2; // works
s_as_s.height = 2; // throws CannotSetSideViaHeightException
s_as_r.height = 2; // throws CannotSetSideViaHeightException
</code></pre>
s_as_s.height = 2 throwing an exception sort of makes sense, but now s_as_r isn't behaving like a rectangle--we're breaking the Rectangle contract again.<p>Another way to prevent it:<p><pre><code> r.height = 2; // works
s_as_s.height = 2; // works
s_as_r.height = 2; // throws ThisWouldBeConfusingException
</code></pre>
Again you're breaking the contract of the language: s_as_s and s_as_r refer to the same object, yet they behave like different objects.<p>There's only one way left I can think of to make Rectangle maintain its contract, Square maintain its contract, and keep s_as_s and s_as_r behaving the same way:<p><pre><code> r.height = 2; // throws MutationException
s_as_s.height = 2; // throws MutationException
s_as_r.height = 2; // throws MutationException
</code></pre>
Does this look familiar? It should: it's basically immutability implemented as checks at run time, which is not a good way to implement it. At this point we should just implement these as immutable objects.<p>I'm not necessarily saying that immutability is the only way. You could also give up inheritance:<p><pre><code> Rectangle makeSquare(int side) {
return new Rectangle(side, side);
}
Rectangle r = new Rectangle(3, 5);
Rectangle s_as_r = makeSquare(4);
print(r.isSquare); // prints false
print(s_as_r.isSquare); // prints true
r.height = 2;
s_as_r.height = 2;
print(r.height); // prints 2
print(r.width); // prints 5
print(s_as_r.height); // prints 2
print(s_as_r.width); // prints 4
print(r.isSquare); // prints false
print(s_as_r.isSquare); // prints false
</code></pre>
This also results in unsurprising behavior. |
One Order of Operations for Starting a Startup | Hey Michael, nice article. I saw in a comment you said students frequently ask you "how do I come up with a good startup idea?" and I know in the article you also included "This is by no means the only path to an MVP...but it is a path that I’ve seen work for a number of YC companies."<p>I wanted to share something tangential that has worked for me as I also speak to students regularly and get the exact same question every time. My response includes the standard "Is there a problem you are passionate about?". But I've also observed how many students don't see problems as "problems" when asked to brainstorm about them, and many would-be entrepreneurs end up left out of the process and discussion. I'm not sure all the reasons why but maybe a pain point isn't significant enough for them to label it a "problem" in their head when asked ("oh that's just an inconvenience not a real problem"). Or maybe their personality leans toward accepting status quo without realizing it can be changed and they can be the ones to change it. Or maybe they are too shy or introverted at first to describe and complain of a problem out loud.<p>So I've modified my response to students a bit to include: I have seen startup ideas come from 1. An idea for a solution to a problem and 2. An idea for something "that would be awesome" if someone created it but no one has yet or at least not done a good job of it. Here's what I mean by "that would be awesome" ideas, to take the example of Justin.tv: it leads to the same startup being created but 2 different types of entrepreneurs might get there by thinking:<p>1. <i>What's a problem you experience and want to solve</i>: "TV used to be entertaining but TV shows now feel stale, boring and with writing that feels so formulaic."<p>2. <i>What is something "that would be awesome" if someone created it</i>: "I love TV and movies, The Truman Show is one of my favorite movies, <i>it would be awesome</i> if someone made a TV show based on following someone's life, similar to The Truman Show but for real and with real people not actors".<p>Same startup. Two ways to get there. Hopefully this all makes sense. I've had good results engaging with students by adding the "that would be awesome if that existed" to the ways to think of startup ideas. This is especially true with students who are shy or less likely to complain about problems out loud for some reason, as well as for students who know how lucky they are to be at a fancy university in a first world country and they feel guilty describing anything as a problem worth solving if it isn't on the level of world hunger or similar (in which case I encourage those students to go solve world hunger if that's what they want to do).<p>Also one more thing. In my opinion the real major problem for aspiring entrepreneurs, bigger than coming up with a problem to solve, is how to brainstorm!<p>Most people assume brainstorming is just "sit and think". And it can be that, and some people are very good at that (and other people aren't good at sit-and-think brainstorming at all but they just get lucky and a good idea pops into their head one day). But there's a lot of literature out there with research on <i>effective brainstorming</i> as well as tips and tricks to get one's brain into a better brainstorm mode (hint: there's a reason why so many people say their best ideas come while showering or jogging).
I am outside the valley so your mileage may vary over there, but I see <i>guided</i> brainstorming sessions work much better than the usual casual 'shoot the shit' type of brainstorming for the majority of people, as brainstorming doesn't come naturally to them. Hence everyone has 99 problems and no good ideas.<p>The most popular thing I do when talking to students is a class discussion to come up with a problem and get the entire group of students brainstorming and iterating solution ideas together, with the goal being "find the first 5 feasible but bad solutions". (Asking them to come up with a "good solution" is too much pressure; then no one wants to raise their hand to say an idea others in the group might think is bad, so it helps to make clear it's OK, we don't need good ideas at this point.) Another thing that helps is I hand out index cards to start things and get students to anonymously write down one "problem to solve" idea or one "That Would Be Awesome" idea. I collect the cards and pick the idea that best facilitates brainstorming and discussion to start things off and then do guided brainstorming together. Most students have never been part of a real brainstorming session beyond brainstorming how to do a group school assignment that gets an A without being too much work for everyone, an assignment they don't really care about. Real brainstorming is hard and they have never done it for real in a group setting.<p>Sorry for the long comment. Just wanted to share something that has worked for me getting more students more involved in entrepreneurship, especially the types of students who currently are under-represented among the population of founders these days for whatever reasons. Also, shout out to my fellow shy introvert founders and anyone who doesn't walk around all day thinking about problems or complaining about them. It's OK to start a company with the goal of making something awesome a reality; everyone has 99 problems and zero good ideas, so try thinking up something awesome instead. |
A Q&A with Mark Zuckerberg About Data Privacy | Here's a piece of speculation.
If you have no time for these harsh words, I'm sure you have your mouse at the ready. I am sorry you cannot crumple the page and toss this into the wastebasket if you'd like. Anything except more hand wringing on the internet. Without further ado...<p>The US propaganda machine has been telling everyone who will listen that "the Russians influenced the election" endlessly for the past year.
Are there hard statistics on this? Is there a methodology, mechanism, or theory describing how effective this manipulation has been, and exactly how we get from a survey to a Trump? I am asking honestly. Because please, please, let's focus on what is quantifiable, if we focus on anything there.<p>Loose talk, anecdotes, and accusations are not sufficient, nor even necessary, if there are hard statistics on the mechanisms of manipulation.
Let's talk about those, then, and cease this glib gab and mindless anger at what we already know is a pretty shitty business. (Facebook)<p>My guess is that there are no hard facts. If Facebook didn't want regulation, they might argue forcefully this way - that the troll C.A. did not effectively do much of anything to influence the election.
My guess is that Facebook wants to be regulated. It will gain cultural validity, put up a barrier to entry against competitors,
and cement for the history books this story that "unregulated social media"/"the unregulated internet" "used to allow bad actors to
influence our democratic process."<p>From then on we will have one social network, Facebook, and everyone will be taught from high school to law school, that such regulations
are necessary to protect our free society from undue influence. Few will question this. The machine will grow. It feeds on ignorance.<p>(What I really think is that even if C.A. did affect the election, we still should not allow Facebook regulatory capture as incumbent. It is worth the cost of manipulation to have the possibility of Facebook being wiped out in the future. - And above all, to avoid having it enshrined and regulated at the same time as a valid, trusted news source. Then it will really reach its full "potential" ... as a propaganda piece for the US rich and powerful.)
(And again, even if C.A. did make an impact, Facebook regulatory capture and validity enshrinement is one of the worst outcomes imaginable.)<p>These are war drums we are hearing. And the marching orders are against freedom of information on the internet generally. Facebook will come out regulated, "made safer" (and it will in turn become a better shill for the powers that be) but the real prize is that the rest of the internet is supposed to be regulated too, and the regulators will have unfettered access to whatever they want. The NSA has nothing like what it will have once the hand wringers make the internet safe for democracy.<p>That is the game plan.
New articles on "bad facebook" "bad Russia" and "bad cambridge analytica" (the last, just next week's scapegoat once facebook is whitewashed and regulated) are all theater in order to get us there.<p>I recommend everyone stop talking about, and stop worrying about, this stupid sideshow and pay attention to what they are trying to really do to us.<p>To be clear, I can't really think less of Facebook. It's a social hack which sweeps up data.... for entities that find it useful... But I'd expect its use in manipulating elections to go up, with increasing regulation (increase in perceived validity).
The biggest manipulator is and always will be our political-media machine. Facebook is just going through the process of being publicly inducted.<p>Facebook is going to come out stronger from this. The narrative will say "after fire and brimstone, years in doubt, Facebook goes through the trial of its life, and comes out smarter, nicer, and above all, more regulated, for the benefit of us all."<p>In practice, society will have Facebook as the permanent social media app, and others basically banned as dangerous.
If that is what you want, then keep up the hand wringing over Cambridge Analytica.<p>This sort of thing went stale for me a long time ago. If you cry out "fix it" "fix it" to the government, they will fix it alright. Fixed. As in permanent, unchangeable, and all powerful. Facebook. The only safe social news platform. Good luck to you. |
Ask HN: What is the point of behavioral interview questions? | Oh boy, am I triggered! I recently spent some time looking for a new job and went through a number of terrible interviews that involved psychological behavioral assessment tests.<p>I don't think it was a coincidence that all of the really bad interviews/companies I've had also made extensive use of psychological evaluation/behavioral analysis questions. These things seem to go hand-in-hand.<p>To directly answer your question, I think the real intent behind behavioral questions is that there wasn't any intent, or much thought. People who ask these kinds of questions often ask them because they just googled for things like "how to conduct an interview" and "questions to ask during an interview". They found websites with examples of terrible psychological evaluation questions, but they don't really know why they are asking them or what they were supposed to get out of the question.<p>What you are seeing here is monkey mimicry. Look at all the companies and hiring managers that have been mimicking what they <i>think</i> is great advice from Big-Company-X (google/facebook/stanford/et-cetera) about how to hire great people. Remember all those stupid brain teaser tests?<p>In the few cases where some train of intelligent thought led to someone asking these silly questions, they usually just want to push you a little to find out if they can make you snap easily, or get frustrated. This is an important and desirable trait in abusive workplaces because the employer doesn't want someone who is going to raise their hand and say, "excuse me, that's not right". Of course, it's important in good workplaces too, because they don't want someone who is overtly aggressive or displays an inability to be flexible in stressful situations, or gets easily flustered.<p>Think of the scene early on in the movie Blade Runner. (But try not to think about the fact that the position being interviewed for was a waste disposal job). Don't pull a Leon Kowalski in the interview.<p>The hiring process is a sieve, a gateway filter. What kinds of people are being let through, and what kinds of people are being filtered out by this style of hiring process? Paying attention to this will often give you an awful lot of insight into how the company operates and what kinds of people you would potentially be working with.<p>There are other, more clever, ways of figuring out the behavioral profile of a potential worker without subjecting them to an abusive and insulting interrogation, but they almost always take more time and effort, and really, who has time for that?<p>The most significant problem I have with psychoanalysis questions is that if the interviewer is not qualified to interpret the results of their experiment, then that's a serious problem. And how many of the HR goons you've met do you think have a degree in psychology, or have been trained in interpreting the results of a psychological evaluation? None at all, ever? Yeah, me too.<p>It's not entirely their fault. Hiring people is really hard! Even if you do put the candidate through a series of exhaustive tests and have qualified staff to administer and evaluate those tests, the result is still that sometimes it just doesn't work out, or the candidate ends up leaving for a better job after a few months.<p>For me, I turn the whole situation around. Behavioral analysis questions are a valuable source of information about the company.
They are usually a display of mediocrity and thoughtlessness (but not always!), and those are usually not the kinds of organizations I want to work in.<p>If an interviewer asks me "Tell me about a time you got into an argument with your manager.", I like to turn it around on them and watch how they react: "I don't make it a point to get into arguments with my co-workers. What happens at work is business, and I don't let it get personal. But it really troubles me that you are so terribly concerned about conflicts in your workplace. What is going on in your organization that is promoting this kind of behavior between staff?" "What kind of manager promotes conflicts and argument rather than engaging in de-escalation and consensus-building, and is that the kind of management I should expect to be working with in your organization?"<p>The above is written a little aggressively (it's all about the in-person delivery), but doing this has been super effective for me. People don't realize that they are projecting the toxicity and problems of their workplace in their questions.<p>At the very least, when it's all over, ask the interviewer why they were asking you these questions, and what they expected to get out of it. If they don't have an answer or seem offended that you would dare ask, then you've got your answer: RUN AWAY.<p>Remember that behavioral assessments are evaluating:
>Communication/verbal skills
>Listening ability
>Attitude
>Ability to play the submit-to-my-authority game<p>Good luck in your next interview! |
I usually run 'w' first when troubleshooting unknown machines | On FreeBSD systems there are two great commands that are not available on other systems.<p>These are the 'gstat' and 'top -m io' commands, both I/O related.<p>Often in top/htop there is not much happening but the server is very slow; a lot of the time it's because of I/O operations.<p>The 'gstat' command will tell you (besides other useful statistics) how heavily the storage devices are loaded:<p><pre><code> # gstat
dT: 1.001s w: 1.000s
L(q) ops/s r/s kBps ms/r w/s kBps ms/w %busy Name
1 9679 9614 4807 0.1 63 671 0.2 82.3| ada0
0 0 0 0 0.0 0 0 0.0 0.0| ada0p1
0 65 0 0 0.0 63 671 0.2 1.4| ada0p2
0 0 0 0 0.0 0 0 0.0 0.0| ada0p3
0 0 0 0 0.0 0 0 0.0 0.0| gpt/boot
0 65 0 0 0.0 63 671 0.2 1.5| gpt/sys
0 0 0 0 0.0 0 0 0.0 0.0| gpt/local
0 0 0 0 0.0 0 0 0.0 0.0| zvol/local/SWAP
</code></pre>
The 'top -m io' command shows which processes do how much I/O:<p><pre><code> # top -m io -o total 10
last pid: 51154; load averages: 0.31, 0.31, 0.28 up 3+18:01:00 14:58:15
54 processes: 1 running, 53 sleeping
CPU: 2.4% user, 0.0% nice, 15.3% system, 5.3% interrupt, 77.1% idle
Mem: 345M Active, 1236M Inact, 153M Laundry, 2158M Wired, 3903M Free
ARC: 834M Total, 46M MFU, 295M MRU, 160K Anon, 5006K Header, 488M Other
67M Compressed, 274M Uncompressed, 4.08:1 Ratio
Swap: 4096M Total, 4096M Free
PID USERNAME VCSW IVCSW READ WRITE FAULT TOTAL PERCENT COMMAND
51021 vermaden 6005 16 6005 0 0 6005 99.92% dd
51154 vermaden 8 9 5 0 0 5 0.08% top
51036 vermaden 6 10 0 0 0 0 0.00% xterm
50907 vermaden 0 0 0 0 0 0 0.00% zsh
50905 vermaden 0 0 0 0 0 0 0.00% xterm
50815 vermaden 0 0 0 0 0 0 0.00% zsh
50813 vermaden 0 0 0 0 0 0 0.00% xterm
50755 vermaden 0 0 0 0 0 0 0.00% tail
41780 vermaden 0 0 0 0 0 0 0.00% leafpad
41255 vermaden 29 11 0 0 0 0 0.00% firefox
</code></pre>
Another command that is very useful on FreeBSD is 'vmstat -i', which shows how many interrupts are occurring:<p><pre><code> # vmstat -i
interrupt total rate
irq1: atkbd0 75135 0
irq9: acpi0 2575929 8
irq12: psm0 135060 0
irq16: ehci0 1532065 5
irq23: ehci1 3265677 10
cpu0:timer 102772345 317
cpu1:timer 94199942 291
irq264: vgapci0 370466 1
irq265: em0 7017 0
irq266: hdac0 1904427 6
irq268: iwn0 148690342 459
irq270: sdhci_pci0 147 0
irq272: ahci0 20875039 64
Total 376403591 1161
</code></pre>
I always suffer when I have to debug Linux systems because of the lack of these commands.
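On Linux one can at least approximate 'top -m io' programmatically; below is a rough sketch in Python with the third-party psutil library (an assumption; it reports cumulative rather than per-second I/O, and per-process counters are not available on every platform):<p><pre><code> # pip install psutil  (third-party; not in the standard library)
import psutil

procs = []
for p in psutil.process_iter(["pid", "name"]):
    try:
        io = p.io_counters()  # cumulative read/write byte counters
        procs.append((io.read_bytes + io.write_bytes, p.pid, p.info["name"]))
    except (psutil.AccessDenied, psutil.NoSuchProcess, AttributeError):
        continue  # some processes are off-limits or already gone

# Print the ten processes with the most cumulative I/O volume.
for total, pid, name in sorted(procs, reverse=True)[:10]:
    print(f"{pid:>7} {name:<20} {total / 1024:>12.0f} KiB")
</code></pre>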
Regards,
vermaden |
Why “blockchain” is BS in 4 slides | Since we started intercoin.org I thought I would go through point by point and see how it applies to our project. It’s true that his points apply to many projects.<p>1) Two conversion steps - since Intercoin is designed to be used as actual currency and not just a store of value, people can pay each other anytime with zero fees. Given enough adoption, people stop cashing out (think PayPal, Venmo etc.) and just pay each other in that economy. They do this to save fees and time.<p>2) Entity to convert currency: Actually this entity can be a simple market maker on an exchange. Within its economy, Intercoin has sidechains for each community and deterministic pricing, with no market makers. And you don’t have to worry about eg PayPal or Cyprus banks freezing your money.<p>3) A “private blockchain” requires far more than that if it is to be used for crypto-currency. The main guarantee is that there are no forks of the log, aka double-spends. Intercoin lets every community run their own distributed ledger, so you don’t need to search the whole world for double-spends. That makes it so efficient you can even do micropayments (Netflix, Basic Attention Token etc.)<p>4) Proof of Work/Stake/blah. Because Intercoin lets communities run their own ledger the way Wordpress lets them run a blog, it comes with its own set of challenges, as small communities can have very few computers. It has to be secure like your end-to-end encrypted email when you get on someone’s wifi. So we can’t use the traditional stuff. <a href="https://intercoin.org/technology.pdf" rel="nofollow">https://intercoin.org/technology.pdf</a><p>5) Lottery-based systems create mining pools: yep, and in fact any kind of proof of stake creates centralization, while any kind of proof of work creates an arms race that leads to centralization. Intercoin takes inspiration more from the XRP consensus protocol and the SAFE network design.<p>6) “Code developers can and do act like central authorities.” If this refers to issuing the Unique Node List like Ripple, or reverting transactions like ETH etc. then that’s not good. If this means putting out a new client or server software and having decentralized adoption then that’s inevitable. Otherwise you get fragmentation like Linux. And even then, there are only a few major Linux distributions. I happen to prefer <i>collaboration</i> on a centralized codebase in this case, but decentralization in everything else. (eg I would be totally OK with WebKit being overseen by the W3C consortium and having new ideas begin as extensions that are finally adopted into the main codebase - think of all the wasted web developer man hours since multiple browsers launched).<p>7) Protection limited to money wasted - not sure what that means. A person knows their money can’t be stolen. In Intercoin our priorities are A) the overall network must never be corrupted, B) no one can steal your money, C) no one can freeze your money permanently, in that order.<p>8) Transactions vs Capacity. VERY good points here, and all global networks are susceptible to this, as are public-facing websites (DDOS etc.) This is why Intercoin is designed to be like the original Internet, with each community able to run its own network and set its own policies. That allows a theoretically UNLIMITED number of transactions per second, not 7 or 1000. Usually each validator includes a free tier for the first X transactions per day, to known members of the network.
Networks run their own computers and the consensus algorithm is much cheaper and doesn’t waste half the world’s electricity to work. The validators, being off-the-shelf computers run by random people, fund themselves through the currency. But they earn money for actually processing transactions, not a lottery.<p>9) Bad economics - yes, this is rampant in cryptocurrency circles (eg people including Satoshi thought Bitcoin being deflationary “sound money” would make people want to spend it, when the opposite is true). Intercoin has among its advisors world-famous economists from different schools like MMT (Warren Mosler), the Austrian School and the Chicago school, precisely for this reason. We want to let communities issue their own currencies and implement UBI on a community level through entirely voluntary means. It brings together people on the left and right.<p>10) “All ICOs are securities being sold fraudulently” - Intercoin Inc. has raised money through exemptions with the SEC (Regulations D and S) and is now working on registering Intercoin tokens <i>with</i> the SEC as securities ahead of a public offering. Not everyone shirks the law. In fact, we consider tech to be only one of the services we provide for communities. The others are turnkey solutions for regulations (securities, money transmission) and taxes (501c3 for UBI donations, capital losses etc.) so communities can install their currencies as easily as Stripe Atlas lets you open a company.<p>I hope this addresses it point by point. |
DNS Performance compared: CloudFlare 1.1.1.1 x Google 8.8.8.8 x Quad9 x OpenDNS | Thanks, great response times from my NYC droplet.<p><pre><code> test1 test2 test3 test4 test5 test6 test7 test8 test9 test10 Average
quad9 1 ms 2 ms 1 ms 1 ms 1 ms 1 ms 1 ms 1 ms 1 ms 1 ms 1.10
cloudflare 2 ms 1 ms 1 ms 2 ms 1 ms 1 ms 1 ms 2 ms 1 ms 1 ms 1.30
comodo 1 ms 2 ms 2 ms 3 ms 2 ms 1 ms 2 ms 1 ms 1 ms 2 ms 1.70
adguard 2 ms 2 ms 3 ms 2 ms 2 ms 2 ms 2 ms 2 ms 2 ms 2 ms 2.10
cleanbrowsing 2 ms 4 ms 2 ms 2 ms 2 ms 2 ms 14 ms 16 ms 2 ms 2 ms 4.80
norton 6 ms 7 ms 7 ms 7 ms 8 ms 7 ms 6 ms 7 ms 7 ms 7 ms 6.90
namecheap 7 ms 7 ms 7 ms 7 ms 7 ms 7 ms 7 ms 7 ms 7 ms 7 ms 7.00
neustar 8 ms 7 ms 7 ms 8 ms 9 ms 6 ms 7 ms 7 ms 7 ms 7 ms 7.30
namecheap2nd 8 ms 8 ms 7 ms 9 ms 9 ms 8 ms 10 ms 8 ms 8 ms 8 ms 8.30
opendns 20 ms 1 ms 1 ms 30 ms 2 ms 8 ms 1 ms 16 ms 15 ms 3 ms 9.70
google2nd 16 ms 1 ms 1 ms 17 ms 1 ms 24 ms 1 ms 16 ms 17 ms 14 ms 10.80
google 17 ms 1 ms 1 ms 17 ms 1 ms 41 ms 1 ms 17 ms 18 ms 15 ms 12.90
cloudflare2nd 1 ms 2 ms 1 ms 1 ms 1000 ms 2 ms 2 ms 1 ms 2 ms 2 ms 101.40
yandex 101 ms 102 ms 104 ms 101 ms 115 ms 103 ms 107 ms 100 ms 105 ms 136 ms 107.40
</code></pre>
Not so much from my home ISP:<p><pre><code> test1 test2 test3 test4 test5 test6 test7 test8 test9 test10 Average
namecheap2nd 45 ms 45 ms 44 ms 45 ms 48 ms 45 ms 45 ms 46 ms 48 ms 45 ms 45.60
cloudflare2nd 45 ms 49 ms 48 ms 47 ms 45 ms 44 ms 45 ms 45 ms 46 ms 46 ms 46.00
namecheap 46 ms 48 ms 48 ms 44 ms 45 ms 45 ms 46 ms 45 ms 45 ms 48 ms 46.00
cleanbrowsing 46 ms 46 ms 44 ms 56 ms 45 ms 44 ms 48 ms 46 ms 44 ms 46 ms 46.50
google2nd 49 ms 47 ms 47 ms 45 ms 51 ms 47 ms 46 ms 44 ms 43 ms 46 ms 46.50
comodo 46 ms 47 ms 48 ms 49 ms 46 ms 47 ms 44 ms 45 ms 47 ms 50 ms 46.90
adguard 49 ms 48 ms 45 ms 46 ms 46 ms 48 ms 49 ms 48 ms 48 ms 48 ms 47.50
google 46 ms 49 ms 47 ms 47 ms 45 ms 47 ms 47 ms 49 ms 44 ms 67 ms 48.80
opendns 47 ms 46 ms 47 ms 64 ms 48 ms 49 ms 46 ms 64 ms 64 ms 48 ms 52.30
cloudflare 44 ms 48 ms 45 ms 50 ms 48 ms 110 ms 45 ms 48 ms 45 ms 47 ms 53.00
quad9 46 ms 49 ms 45 ms 47 ms 49 ms 153 ms 46 ms 45 ms 48 ms 46 ms 57.40
neustar 66 ms 66 ms 66 ms 67 ms 66 ms 66 ms 66 ms 67 ms 66 ms 67 ms 66.30
norton 91 ms 67 ms 67 ms 67 ms 66 ms 66 ms 67 ms 66 ms 67 ms 67 ms 69.10
yandex 176 ms 279 ms 176 ms 174 ms 188 ms 178 ms 179 ms 176 ms 174 ms 179 ms 187.90</code></pre> |
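If you want to run a similar comparison yourself, here is a rough sketch using Python and the third-party dnspython library (an assumption on my part; the script that produced the tables above may differ). Note that repeated queries for the same name can be served from the resolver's cache, which is why some of the times above collapse to 1 ms:<p><pre><code> # pip install dnspython  (third-party; not in the standard library)
import time
import dns.resolver

RESOLVERS = {"cloudflare": "1.1.1.1", "google": "8.8.8.8", "quad9": "9.9.9.9"}

for name, ip in RESOLVERS.items():
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [ip]
    timings = []
    for _ in range(10):
        start = time.perf_counter()
        r.resolve("example.com", "A")  # dnspython >= 2.0 API
        timings.append((time.perf_counter() - start) * 1000)
    print(f"{name:<12} avg {sum(timings) / len(timings):6.2f} ms")
</code></pre>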
Rules of Engagement: How Cities Are Courting Amazon’s New Headquarters | Don’t get too fancy. Head to edgier, trendier neighborhoods. And definitely stay on time.<p>These are a few of the tricks cities are deploying in their all-out effort to woo Amazon.com Inc.’s new headquarters, a move the online retailer says could bring nearly 50,000 jobs and more than $5 billion in investment over nearly two decades. Amazon executives have quietly visited more than half of the cities on its list of 20 finalists, which are vying to host what it calls HQ2, according to people familiar with the matter. The visits, which started in recent weeks, have included Dallas, Chicago, Indianapolis and the metropolitan Washington, D.C., area.<p>Amazon hasn’t provided much guidance on what it expects from the site visits. It has asked for breakout sessions on education and talent, plus visits to the sites it is considering, all within a strict time frame of two days, max. The rest is up to local officials.<p>As a result, officials of candidate cities are scrutinizing the company’s business practices to concoct the perfect 48-hour-or-less visit that could win over Amazon.<p>Details of the visits are scarce. Amazon, which has said it plans to announce the winning suitor later this year, has advised HQ2 candidates to keep this phase private. And city officials are loath to spill any tricks that could give their rivals a leg up.<p>Still, people familiar with the visits paint a picture of whirlwind trips with Amazon’s small economic-development team. Led by Holly Sullivan, it dives into data provided by the cities—such as the ACT and SAT scores of local high-school students—and asks a lot of probing questions regarding how much talent Amazon can attract.<p>Municipal officials said that while Amazon has some of the quirky features of a hotshot technology company—it allows employees to bring thousands of dogs to its Seattle campus and gives out free bananas at two campus stands—it doesn’t offer many of the other trappings associated with working at a tech giant, such as free meals.<p>Amazon is “a frugal-ass company,” said one city official working to win the project. “They’re making a 100-year decision…. All of the extra fluffy stuff is fluff.”<p>So, many cities are nixing the fancy hotels, dinners at the governor’s mansion, private planes and small gifts—all typical core aspects of a traditional site visit, according to people familiar with the matter. “We were concerned that if we went over the top, it would push them away,” said a person involved in one site visit.<p>Instead, they are attempting to be creative by bringing in university officials, younger people and professionals who can speak to talent and growth in the area. Officials are adding visits to trendier neighborhoods to highlight the draw for younger employees. And cities have even ferried Amazon executives around by bicycle and boat.<p>“Amazon is working with each HQ2 candidate city to dive deeper on their proposals and share additional information about the company’s plans,” an Amazon spokesman said in a written statement. The spokesman confirmed site visits were taking place.<p>A few of Amazon’s desires are crystallizing, some of the people said. The company appears to be leaning toward a more urban site, despite requesting proposals that included sites in the suburbs. It also wants to come to a city prepared to handle the company’s growth and the influx of high-paid employees.
In its home base of Seattle, Amazon has faced criticism for contributing to traffic and higher housing costs.<p>“They believe there is no American city that can provide for all their needs,” said the person involved in a site visit. Another person said the company expects to have to compromise no matter which location it selects. Amazon has also placed particular emphasis on the tech talent pipeline: how much already exists locally, and how much Amazon can attract from around North America and the rest of the world.<p>Some cities are offering financial incentives. New Jersey and the city of Newark have offered a package valued at $7 billion, while Maryland—where Amazon is considering a location in Montgomery County—has put $5 billion on the table.<p>In the past, cities have become creative to attract or retain a coveted company. To try to keep JetBlue Airways Corp. based in New York, the city’s economic-development corporation did its research. Officials found out the company hosted regular softball tournaments, so they specifically pitched the airline on a location near a field. City and state officials also worked out a joint deal offering JetBlue co-branding rights to the “I (heart shape) NY” logo as well as the slogan, “JetBlue, New York’s Hometown Airline”—something that helped seal the deal.<p>Washington, D.C., might have a leg up, having already hosted Amazon Chief Executive Jeff Bezos for visits when he considered acquiring the Washington Post, which he now owns. Mr. Bezos also purchased the former Textile Museum in Washington’s Kalorama neighborhood for $23 million in 2016 and is currently turning it into a private residence. |
Ask HN: How to self-learn electronics? | The Context<p>Electronics is a complex topic but doable with the right strategy. Suppose instead of electronics I wanted to learn "how does my mind move my finger?". As a person with a mind and a finger the answer is quite simple: my mind creates the command and the finger moves. While this is true, under the covers the actual details are a lot more complex. The good news is that electronics is a lot simpler than moving your finger. The trick is to learn the simpler ideas that allow you to do actual things without getting lost in the details.<p>Another problem is that once someone understands an area of learning they tend to want to teach it from a top-down perspective. In the case of electronics it's "well here's Maxwell's equations and it's all you really need". But people tend to learn better working from the bottom up, understanding various simple things and building up the abstract structure as they go.<p>So here's my recommendation for a hierarchy of learning for electronics. Understand that at each step you're never getting "the complete truth" but also you're not getting actual lies.<p>First, electronics is divided into two types of things: passive components and active components. A resistor is a type of passive component and a transistor is a type of active component.<p>It's best to get a solid foundation in passive components before moving on to active components even though "proper electronics" is about active components. That's because most of the time active components are actually thought of as a kind of combination of passive components.<p>So start with a battery and a resistor and the equation V=IR. While this seems way too simple it's actually the idea that is used in a lot of electronics, so it's good to understand it pretty well. And it's conceptually pretty clear.<p>Once you feel you have a good solid, unshakeable understanding of one resistor and a voltage supply move on to two resistors in series and then two resistors in parallel. Calculate the voltage and current for each one.<p>Next keep adding resistors in arbitrary combinations up to say a dozen resistors and become confident that you can calculate the voltage and current for any combination that is given to you.<p>At this point it's very helpful to think of current in terms of water flow and voltage in terms of water pressure. (The resistor is analogous to the size of the pipe that the water is flowing through.) This analogy, with some refinements, goes a long way in electronics so it's good to start thinking like this.<p>The next step is to take a look at a circuit of a resistor and a capacitor in series. This is the classic RC circuit that is used a lot. A capacitor can store energy and so its characteristics can be quite different than those of resistors. However, with a little mathematical sleight of hand capacitors can be treated "just as if they were resistors" in many circumstances and this makes calculations (and thought processes) quite a lot easier.<p>Learn the mathematical techniques of analyzing circuits made of resistors and capacitors driven by both an AC voltage and a DC voltage for some very simple circuits. One resistor and one capacitor is plenty to start with.<p>Note that these circuits have both a 'transient' phase and a 'steady state' phase. You can think of this in terms of picking a ball up from the floor and then dropping it.
The time during which the ball is bouncing is the 'transient' phase and after the ball stops bouncing that's the 'steady state' phase. For the most part electronics concerns itself with the 'steady state' phase. However, a circuit driven by a steady AC (sinusoidal) voltage (or current) can be analyzed in a steady-state manner even though the values are varying with time.<p>The other standard passive component is the inductor. Don't worry about it until you get quite confident in your understanding of capacitors, since inductors are intuitively harder to figure out and mathematically both components are treated very similarly.
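Before moving on to active components, here is a quick numeric sketch in Python of the resistor and RC drills above (the component values are invented; the point is practicing V = IR and the time constant):<p><pre><code> import math

def series(*rs):
    return sum(rs)  # resistors in series simply add

def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)  # reciprocals add in parallel

V = 9.0                                    # battery voltage, volts
R = series(100.0, parallel(220.0, 330.0))  # 100 ohm + (220 || 330) ohm
I = V / R                                  # from V = I * R
print(f"total resistance: {R:.0f} ohm")            # 232 ohm
print(f"total current:    {I * 1000:.1f} mA")      # 38.8 mA
print(f"drop across pair: {I * parallel(220.0, 330.0):.2f} V")  # 5.12 V

# RC circuit: the transient dies out on the scale of tau = R * C.
R_rc, C = 1000.0, 1e-6                     # 1 kohm and 1 uF -> tau = 1 ms
tau = R_rc * C
v_cap = V * (1 - math.exp(-0.001 / tau))   # capacitor voltage after 1 ms
print(f"tau = {tau * 1e3:.1f} ms, V_cap(1 ms) = {v_cap:.2f} V")  # 5.69 V
</code></pre>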
Active Components
While there are quite a few semiconductor-based active components, simplify by studying the three main types first. These are diodes, bipolar transistors (npn, pnp) and field effect transistors (FETs). The theory of how these devices actually work is very complicated and not really worth the effort. The diodes are quite simple to understand so start with those.<p>The transistors are trickier. They operate in two modes, non-linear and linear. Non-linear is messy and best left to later. The linear regime is where these are mostly used and is conceptually not too difficult. In fact they operate in a manner not much different than the knob that controls water flow (there's that metaphor again) in your shower. A transistor has three terminals; one of the terminals is used to change the resistance value between the other two terminals.<p>In the case of bipolar transistors the controlling input is a current. With FET transistors the controlling input is a voltage.<p>And that's it really. There are more complicated things like phase-locked loops and more niche devices like SCRs, but these are the basics.<p>Once you start to actually put circuits together do yourself the favor of learning to solder and wirewrap rather than using breadboards. For a tiny bit of extra effort you'll likely save yourself hours of frustration because your circuit connections will be much more reliable. |
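To put numbers on the control-knob picture of the bipolar transistor above, a back-of-envelope sketch in Python (the values are invented, and real current gain varies widely from part to part):<p><pre><code> # Bipolar transistor in its linear region: collector current is roughly
# the current gain (beta, a.k.a. hFE) times the base current.
beta = 100          # a typical small-signal NPN gain, datasheet-dependent
i_base = 50e-6      # 50 microamps driven into the base
i_collector = beta * i_base
print(f"I_c ~ {i_collector * 1000:.0f} mA")  # ~5 mA
</code></pre>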
Backpage.com seized by U.S. justice authorities | An interesting side aspect to the report is that Larkin and Lacey reside in Arizona and owned the New Times and Village Voice [1]; they separated Backpage content from those.<p>Both the New Times and Village Voice are advocates on issues of civil rights and they cause lots of problems for authoritarian politicians and organizations trying to control people's lives [2][3]. The Lacey and Larkin Fund is a huge supporter/funder of groups like the ACLU and immigrant groups, donating heavily to civil rights causes [2][6].<p>After looking at the facts, part of the attack on Backpage is, in my opinion, political.<p>> <i>Backpage started as the literal back page of the New Times, filled with classified ads.</i><p>> <i>Lacey and Larkin, former New Times executives who sold off the newspaper chain in 2012, retained the lucrative interest in the Backpage website.</i><p>Both Larkin and Lacey are big civil rights advocates and donate heavily to civil rights causes, sex rights, gay rights and immigrant rights [2][3], and the New Times attacks politicians for corruption regularly. After they attacked Arpaio they had nearly a decade of attacks from him and associated groups [5]. One strange tactic used against Larkin and Lacey was going after New Times readers.<p>><i>In October 2007, Maricopa County sheriff's deputies arrested Lacey and Larkin on charges of revealing secret grand jury information concerning the investigations of the New Times's long-running feud with Maricopa County sheriff Joe Arpaio. In July 2004, the New Times published Arpaio's home address in the context of a story about his real estate dealings, which the County Attorney's office was investigating as a possible crime under Arizona state law. A special prosecutor served Village Voice Media with a subpoena ordering it to produce "all documents" related to the original real estate article, as well as "all Internet web site traffic information" to a number of articles that mentioned Arpaio.</i> [5]<p>Arpaio tried to get all information on all Phoenix New Times readers and the paper has been known to be tough on Arpaio overreaches in Arizona on immigrants and non-conservatives.<p>> <i>The prosecutor further ordered Village Voice Media to produce the IP addresses of all visitors to the Phoenix New Times website since January 1, 2004, as well as which websites those readers had been to prior to visiting. As an act of "civil disobedience", Lacey and Larkin published the contents of the subpoena on or about October 18, which resulted in their arrests the same day. On the following day, the county attorney dropped the case after declining to pursue charges against the two.</i> [5]<p>> <i>The special prosecutor's subpoena included a demand for the names of all people who had read the Arpaio story on the newspaper's website. It was the revealing of the subpoena information by the New Times which led to the arrests. Maricopa County Attorney Andrew Thomas dropped the charges less than 24 hours after the two were arrested</i> [5]<p>> <i>In the weeks following the arrests, members of the Association of Alternative Newsweeklies, of which the Phoenix New Times is a member, provided links on their websites to places where Arpaio's address could be found. This was done to show solidarity with the Phoenix New Times.</i>[5]<p>There is a strange section of the Backpage report attacking Lacey and Larkin for selling it to an outside investor who is offshore.
They appear to hide ownership, some might say wisely, given that the site is the whipping boy for 'trafficking' claims when really it is a sex worker ad website [4].<p>> <i>Third, despite the reported sale of Backpage to an undisclosed foreign company in 2014, the true beneficial owners of the company are James Larkin, Michael Lacey, and Carl Ferrer. Acting through a complex chain of domestic and
international shell companies, Lacey and Larkin lent Ferrer over $600 million to purchase Backpage from them. But as a result of this deal, Lacey and Larkin retain significant financial and operational control, hold almost complete debt equity in the company, and still receive large distributions of company profits. According to the consultant that structured the deal, moreover, this transaction appears to provide no tax benefits. Instead, it serves only to obscure Ferrer’s U.S.-based ownership and conceal Lacey and Larkin’s continued beneficial ownership.</i><p>I'd say part of the attack on Backpage is political, and it shuts down a big political donor for civil rights and typically liberal aims [2]. It seems their selling off the New Times and Village Voice and moving ownership overseas from 2012-2014 irked some politicians and authoritarians.<p>The Backpage report's claims of 'trafficking' amount to simply a profanity filter that Backpage created to PREVENT people from posting bad ads or possible 'trafficking', not to encourage it [4].<p>The report says Backpage 'knowingly concealed evidence of criminality' because a profanity/word filter removed bad terms from posted ads? They didn't want people posting ads with these terms because they were bad words, not because they supported the activity [4].<p>That argument is like saying that, because a site's profanity filter removed racial slurs, the site was 'knowingly concealing evidence of racism'.<p>The whole idea was to block it, not some devious hiding scheme. Understand: Backpage made lots of money; they wouldn't risk it for 'trafficking', and 'child trafficking' is such a reach that it just seems tacked on to the report with no evidence of it. Of course Backpage doesn't want bad words posted in its ads; doesn't every classifieds site?<p>The New Times, the Village Voice and, by association, Backpage have all been targets in Arizona of Arpaio, McCain and authoritarian orgs, mainly because they are tough on authoritarian politicians, including Trump. It seems there was more to this attack than what is stated in the report.<p>The New Times has a history of being pro-civil rights and anti-war.<p>> <i>The Phoenix New Times is a free alternative weekly Phoenix, Arizona newspaper, published each Thursday. It was the founding publication of New Times Media (now Village Voice Media), but The Village Voice is now the flagship publication of that company.</i>[5]<p><i>The paper was founded in 1970 by a group of students at Arizona State University, led by Frank Fiore, Karen Lofgren, Michael Lacey, Bruce Stasium, Nick Stupey, Gayle Pyfrom, Hal Smith, and later, Jim Larkin, as a counterculture response to the Kent State shootings in the spring of that year. Gary Brennan played a role in its creation. According to the 20th Anniversary issue of the New Times, published on May 2, 1990, Fiore suggested that the anti-war crowd put out its own paper. The first summer issues were called the Arizona Times and assembled in the staff's La Crescenta apartments across from ASU. The Arizona Times was renamed the New Times as the first college issue went to press in September 1970.</i>[5]<p>The New Times pushes civil rights and personal freedoms, including ending marijuana prohibition, and calls out prohibitionists in Arizona, as well as championing other causes for personal freedom [7].<p>The New Times has been kicking up dust about authoritarianism since the 70s. 
Lacey and Larkin also won a lawsuit against Arizona as recently as 2013 for false arrest, which is still used to attack them [5].<p>> <i>In December 2013, the Maricopa County Board of Supervisors agreed to pay Phoenix New Times founders Michael Lacey and Jim Larkin $3.75 million to settle their false arrest lawsuit against the county defendants.</i><p>Take a look at their civil rights fund to see what I mean about how they take authority orgs/politicians to task and encourage civil rights [2]. They have been trying to take down the New Times, Village Voice Media and Backpage for nearly a decade and a half [5]. The whole report on Backpage, and on its owners Lacey and Larkin, who started the New Times and Village Voice, might be a massive ad hominem [4]. It also appears to be an attack on owners of alternative media influence and on funding for civil rights matters.<p>My guess is Lacey and Larkin, civil rights fighters who seem similar to Larry Flynt [2], won't let this just happen, and they'll fight it. Most of the attacks on Backpage, and previously on the New Times and Village Voice, attack their character via ad hominems because they are causing trouble for authoritarians and pushing alternative news media funding. My guess is this takedown of Backpage is no different.<p>[1] <a href="https://www.azcentral.com/story/news/local/phoenix/2018/04/06/fbi-raids-backpage-founders-sedona-home-website-down/494538002/" rel="nofollow">https://www.azcentral.com/story/news/local/phoenix/2018/04/0...</a><p>[2] <a href="http://www.laceyandlarkinfronterafund.org/" rel="nofollow">http://www.laceyandlarkinfronterafund.org/</a><p>[3] <a href="http://www.phoenixmag.com/valley-news/i-fought-the-law-and.html" rel="nofollow">http://www.phoenixmag.com/valley-news/i-fought-the-law-and.h...</a><p>[4] <a href="https://www.hsgac.senate.gov/imo/media/doc/Backpage Report 2017.01.10 FINAL.pdf" rel="nofollow">https://www.hsgac.senate.gov/imo/media/doc/Backpage Report...</a><p>[5] <a href="https://en.wikipedia.org/wiki/Phoenix_New_Times" rel="nofollow">https://en.wikipedia.org/wiki/Phoenix_New_Times</a><p>[6] <a href="https://www.azcentral.com/story/news/politics/arizona/2017/04/20/gop-pressures-kyrsten-sinema-on-backpage-donations/100683704/" rel="nofollow">https://www.azcentral.com/story/news/politics/arizona/2017/0...</a><p>[7] <a href="http://www.phoenixnewtimes.com/news/here-are-the-prohibitionists-whove-donated-10-000-or-more-to-keep-marijuana-a-felony-in-arizona-8794628" rel="nofollow">http://www.phoenixnewtimes.com/news/here-are-the-prohibition...</a> |
Ask HN: Are there any reasonable alternatives to MacBook Pro for developer? | I bought a Sager (customized Clevo chassis) in 2010, and have just replaced it about a month ago with another Sager. My needs are somewhat different than pure editing/sw dev. I do quite a bit of analytics, visualization, and some CUDA bits. I could build a deskside (I generally prefer them), but I need to take my workstation with me.<p>Apart from the $dayjob MBP, all my laptops/servers run Linux. So Linux compatibility is a must. Things which don't work should be unimportant to me (fingerprint reader). I use Linux Mint, as I don't want to be messing around with my primary machine, and everything just works with it. Best Linux desktop experience I've had in 18 years of running Linux desktops.<p>I opted for the Sager/Clevo platform because of research, reviews, etc. I'll talk Dell, HP, and Toshiba below (which I've also owned).<p>Clevo platforms are mostly end user upgradable and serviceable, so if you need more of something, with a screwdriver and some patience, you can add it. This probably doesn't make sense for the people who are concerned about damaging their machines, though as someone who has built machines for ~30 years now, this is old hat to me.<p>My 2010 model has 16GB RAM, i7 quad core, NVidia GTX 560m, and now a SATA SSD, along with a PCI gigabit ethernet port and some sort of Intel wifi card. It was showing its age, in that the GPU (on an MXM card) was starting to fail under load. I replaced the CPU/GPU fans and cleaned the unit, but failure events kept increasing, and the gigabit port occasionally isn't recognized on boot.<p>On top of that, it runs hot and loud. The fans are always on, and slightly more than a whisper during idle. During heavy load, it can be loud. Not ideal for my situation. No usable battery life; call it about an hour if I am lucky. Screen resolution is 1920x1080 or something. I had plugged it into an old monitor on my desk (recently replaced with a HiDPI 3.8k x 2.xk) and it ran 1920x1200 nicely.<p>It is heavy. And the battery clips don't keep the battery secure in the machine. So there's that.<p>I looked again in great depth at the options. Here is where I talk about my Dell experiences.<p>Every single Dell laptop I have ever bought, every single one, has had the infamous "unknown power supply" bug, which has only been curable by a motherboard replacement. These were high end workstations (4100), mid range consumer, and cheap consumer units.<p>The take-away: I cannot and will not recommend Dell. I will actively recommend against Dell. Their build quality generally sucks. Their ability to survive more than a year before needing a motherboard replacement is lacking. Their cases and keyboards are a bad joke. They are bulky, annoying, and not serviceable by mere mortals.<p>Linux sort of/kind of works on Dells. Not really, but hey, they market an Ubuntu laptop.<p>HP has generally been reasonable, usually offering some insanely interesting combinations of things at good prices, but then making other choices on the same platform which require you to hack really hard to make the thing work. I loved my big HP laptop. I hated that it used a NIC that only had Windows drivers. This was back in the PCMCIA days, and I was able to find workable PCMCIA NICs and modems (yeah, really dating myself there ...).<p>I bought my wife and daughter Toshiba units one year to replace their failed Dells. The Toshibas failed within 9 months of acquisition. Not serviceable, and Toshiba wouldn't honor its warranty. 
So, out to the dumpster with those.<p>We bought a pair of Samsung laptops to replace those. Nice specs but cheap plastic cases, and both eventually died with chassis fractures.<p>By this time, I had had it with Windows (7 Pro) and its insanely broken networking. I gave them a choice on their next laptops: either Macs or Linux machines, as I was refusing to support Windows any more. They played with my work MBP (Linux at home on my laptop, MBP for work) and my Linux box, and chose MBPs.<p>Cost me a bit more, but it just works (as do the Linux boxen). Nearing the end of life for these units, and they are looking at new ones in a few months.<p>The short of it is, for their work (mostly editing, web stuff, etc.), an MBP is fine. Similar to SW dev in many ways (and my daughter is getting into SW dev in college), so this works out well.<p>For heavy computation, analysis, visualization, my new unit is quite nice.<p>Sager NP8156. I upgraded from 16GB to 48GB RAM (I run lots of VMs), and upgraded the WD 250GB SSD to 1.5TB of SSD. NVidia GTX 1060 with 6GB RAM. USB-C and USB 3, integrated PCIe-based NICs, good wireless. Easy to service. Runs Linux Mint 18.3 on a 3.8k x 2.x k monitor at high res. Even under load, it is quite quiet.<p>Downsides: 1) I didn't opt for the higher end display on the laptop itself. 2) Battery life isn't great (2 hours).<p>I brought it with me on a business trip to Korea a few weeks ago, for some of my dev/testing work, alongside my $dayjob MBP with emojibar (can't stand that thing). Better overall experience. I used it as a NAT/router for the team there with me, while running on it myself.<p>What would make it better would be a better screen res and a better battery. Otherwise, for me, it's a perfect workstation replacement unit. |
Blockchain is not only crappy technology but a bad vision for the future | The criticisms of blockchain tech have become so incoherent that there may be cause for confidence again. Full circle!<p>> There is no single person in existence who had a problem they wanted to solve, discovered that an available blockchain solution was the best way to solve it, and therefore became a blockchain enthusiast.<p>It's amazing to me that this person finished this sentence and didn't, in the time it took to type it, realize how flatly wrong it is. Every corner of the USA, online and otherwise, has satisfied customers who were able to purchase psychoactive compounds online despite the prohibition against them. This is a wonderful application of blockchain tech and, as a solution, has produced many enthusiasts.<p>Moreover, society benefits generally from drug prohibition being undermined, and this too has produced more indirect enthusiasts.<p>However, this phenomenon serves to show the blockchain's fitness, not the limits of its reach.<p>Many of us spend our days working on blockchain tech that is far more mundane and unlikely to grab headlines, but also fits more cleanly into current legal and political structures. What has the author to say of that?<p>> The number of retailers accepting cryptocurrency as a form of payment is declining, and its biggest corporate boosters like IBM, NASDAQ, Fidelity, Swift and Walmart have gone long on press but short on actual rollout.<p>Borrowing from the previous confused critique: are any blockchain enthusiasts (regardless of their original fount of enthusiasm) also enthusiasts for these giant, largely outdated companies? The fact that these companies have tried and failed to integrate blockchain tech is, in the minds of most enthusiasts, a positive sign, not a negative one.<p>> Hm. Perhaps you are very skilled at writing software. When the novelist proposes the smart contract, you take an hour or two to make sure that the contract will withdraw only an amount of money equal to the agreed-upon price, and that the book — rather than some other file, or nothing at all — will actually arrive.<p>Not a critique of blockchain tech, but of open source software generally. And a bad one.<p>> “Keep your voting records in a tamper-proof repository not owned by anyone” sounds right — yet is your Afghan villager going to download the blockchain from a broadcast node and decrypt the Merkle root from his Linux command line to independently verify that his vote has been counted?<p>I'll note the racist overtones here without further comment.<p>I will, however, point out that no blockchain voting system has ever been proposed with such a UI.<p>> These sound like stupid examples — novelists and villagers hiring e-bodyguard hackers to protect them from malicious customers and nonprofits whose clever smart-contracts might steal their money and votes??<p>Yes, they sound like stupid examples. They are stupid examples, designed precisely for use as strawmen in this stupid essay.<p>> A person who sprayed pesticides on a mango can still enter onto a blockchain system that the mangoes were organic. A corrupt government can create a blockchain system to count the votes and just allocate an extra million addresses to their cronies. An investment fund whose charter is written in software can still misallocate funds.<p>I have no idea what point the author is making now. 
Can anyone clarify?<p>> The contract still works, but the fact that the promise is written in auditable software rather than government-enforced English makes it less transparent, not more transparent.<p>This has not been history's experience with government-enforced English. By ignoring this fact, the author allows himself to indulge in arguments that don't make sense in order to make points that don't matter. Such as...<p>> Eight hundred years ago in Europe — with weak governments unable to enforce laws and trusted counterparties few, fragile and far between — theft was rampant, safe banking was a fantasy, and personal security was at the point of the sword. This is what Somalia looks like now, and also, what it looks like to transact on the blockchain in the ideal scenario.<p>...and...<p>> Silk Road, a cryptocurrency-driven online drug bazaar. The key to Silk Road wasn’t the bitcoins (that was just to evade government detection), it was the reputation scores that allowed people to trust criminals.<p>But I think the author's most dangerous fallacy is actually his apparent fear that systems of "alternate" trust or consensus somehow threaten to push out systems in which trust is currently serving well. For example:<p>> Projects based on the elimination of trust have failed to capture customers’ interest because trust is actually so damn valuable. A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety is not a paradise but a crypto-medieval hellhole.<p>I have met and worked with dozens of movers-and-shakers in blockchain tech, and I have never, ever met anyone whose politics are fairly characterized as advocating "A lawless and mistrustful world where self-interest is the only principle and paranoia is the only source of safety."<p>Quite the contrary: I think that there's a realization that small batches of community cooperation work quite well, and that trustless technology has the capacity to out-compete the violent regimes that threaten it, such as drug prohibition, fiat currency, and environmental recklessness.<p>The author concludes with the only paragraph in his essay which is sensible and sober:<p>> As a society, and as technologists and entrepreneurs in particular, we’re going to have to get good at cooperating — at building trust, and, at being trustworthy. Instead of directing resources to the elimination of trust, we should direct our resources to the creation of trust—whether we use a long series of sequentially hashed files as our storage medium or not.<p>On these points, we agree. However, these points are not supported by the rest of the author's essay.<p>Clear thinking and reasonableness need to be the orders of the day. Cooperation and compassion need to be the acts of our daily drive. And, so far as I can tell - the many scams and holdovers from yesterday's economy notwithstanding - blockchain tech is full of these things.<p>We all want peace. We all want freedom. And yes, we all want trust and strong communities. The idea that blockchain tech is a threat to these things is confusion at best, fear-mongering at worst. |
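On the Merkle-root jab specifically: checking an inclusion proof is mechanical enough that any wallet or voting app would hide it behind a single button; nobody needs a command line. A minimal Python sketch of the verification step, with a made-up two-leaf tree (my own illustration, not any actual proposed voting system):<p><pre><code>import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof, root: bytes) -> bool:
    """Walk an inclusion proof: 'proof' is a list of
    (sibling_hash, sibling_is_left) pairs from leaf to root."""
    node = h(leaf)
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# Tiny two-leaf tree: root = H(H(a) + H(b))
a, b = b"vote:alice", b"vote:bob"
root = h(h(a) + h(b))
assert verify_inclusion(a, [(h(b), False)], root)
assert verify_inclusion(b, [(h(a), True)], root)
</code></pre>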
Why SQLite Does Not Use Git | I am not qualified to judge whether Fossil is better than git, and I can completely acknowledge that git has a steep learning curve (although I feel that a big chunk of that learning curve is unlearning previous VCS experience).<p>But, now that I do know git, the biggest change I noticed from previous VCSes is how much I work on multiple issues in the same repo. Something that was extremely hard with CVS, SVN, P4 (10yrs ago).<p>A friend was struggling with git recently and ranting about it. He didn't get it and didn't understand why anyone would use it compared to what he was used to (non DVCS). I wrote him this analogy:<p>> Imagine someone was working with a flat file system, no folders. They somehow have been able to get work done for years. You come along and say “You should switch to this new hierarchical file system. It has folders and allows you to organize better”. And they’re like “WTF would I need folders for? I’ve been working just fine for years with a flat file system. I just want to get shit done. I don’t want to have to learn these crazy commands like cd and mkdir and rmdir. I don’t want to have to remember what folder I’m in and make sure I run commands in the correct folder. As it is things are simple. I type “rm filename” it gets deleted. Now I type “rm foldername” and I get an error. I then have to go read a manual on how to delete folders. I find out I can type “rmdir foldername” but I still get an error the folder is not empty. It’s effing making me insane. Why can’t I just do it like I’ve always done!”. And so it is with git.<p>> One analogy with git is that a flat filesystem is 1 dimensional. A hierarchical file system is 2 dimensional. A filesystem with git is 3 dimensional. You switch in the 3rd dimension by changing branches with git checkout nameofbranch. If the branch does not exist yet (you want to create a new branch) then git checkout -b nameofnewbranch.<p>> Git’s branches are effectively that 3rd dimension. They set your folder (and all folders below) to the state of the stuff committed to that branch.<p>> What this enables is working on 5, 10, 20 things at once. Something I rarely did with cvs, svn, p4, or hg. Sure once in awhile I’d find some convoluted workflow to allow me to work on 2 things at once. Maybe they happened to be in totally unrelated parts of the code in which case it might not be too hard if I remembered to move the changed files for the other work before check in. Maybe I’d checkout the entire project in another folder so I'd have 2 or more copies of the project in separate folders on my hard drive. Or I’d back up all the files to another folder, checkout the latest, work on feature 2, check it back in, then copy my backed-up folder back to my main work folder, and sync in the new changes or some other convoluted solution.<p>> In git all that goes away. Because I have git style lightweight branches it becomes trivial to work on lots of different things and switch between them instantly. It’s that feature that I’d argue is the big difference. Look at most people’s local git repos and you’ll find they have 5, 10, 20 branches. One branch to work on bug ABC, another to work on bug DEF, another to update the docs, another to implement feature XYZ, another working on a longer term feature GHI, another to refactor the renderer, another to test out an experimental idea, etc. 
All of these branches are local to them only and have no effect on remote repos like github (unless they want them to).<p>> If you’re used to not using git style lightweight branches and working on lots of things at once, let me suggest it’s because all other VCSes suck in this area. You’ve been doing it that way so long you can’t even imagine it could be different. The same way, in the hypothetical example above, the guy with the flat filesystem can’t imagine why he’d ever need folders and is frustrated at having to remember what the current folder is, how to delete/rename a folder or how to move stuff between folders etc. All things he didn’t have to do with a flat system.<p>> A big problem here is the word branch. Coming from cvs, svn, p4, and even hg the word "branch" means something heavy, something used to mark a release or a version. You probably rarely used them. I know I did. That's not what branches are in git. Branches in git are a fundamental part of the git workflow. If you're not using branches often you're probably missing out on what makes git different.<p>> In other words, I expect you won’t get the point of git style branches. You’ve been living happily without them, not knowing what you’re missing, content that you pretty much only ever work on one thing at a time or find convoluted workarounds in those rare cases you really have to. git removes all of that by making branching the normal thing to do, and just like the person that’s used to a hierarchical file system could never go back to a flat file system, the person that’s used to git style branches and working on multiple things with ease would never go back to a VCS that’s only designed to work on one thing at a time, which is pretty much all other systems. But, until you really get how freeing it is to be able to make lots of branches and work on multiple things, you’ll keep doing it the old way and not realize what you’re missing. Which is basically why all anyone can really say is “stick it out and when you get it you’ll get it”.<p>> Note: I get that p4 has some features for working on multiple things. I also get that hg added some extensions to work more like git. For hg in particular though, while they added after-the-fact optional features to make it more like git, go through pretty much any hg tutorial and it won't teach you that workflow. It's not the norm AFAICT, whereas in git it is the norm. That difference in defaults is what really sets the two apart.<p>Sorry that was so long, but my question for the Fossil guys would be "which workflow does Fossil encourage?" Lots of parallel development like git, or, like many other VCSes, not so much parallel dev? Are branches light and easy like git, or are they only meant for marking versions like they were in SVN, P4, CVS? Do branches even need to be related, or can they be completely unrelated like gh-pages, and the VCS won't complain that you're "off master" as hg does (did?) |
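If the branch-per-task workflow in that analogy still sounds abstract, here's a tiny self-contained sketch that drives it from Python, only because that makes the demo reproducible in one file. The branch names, file names and commit messages are all made up, and it assumes git is on your PATH:<p><pre><code>import os
import subprocess
import tempfile

def git(*args):
    # Thin wrapper so each step reads like the CLI commands quoted above.
    subprocess.run(["git", *args], check=True)

os.chdir(tempfile.mkdtemp())
git("init", "-q")
git("config", "user.email", "dev@example.com")  # throwaway identity
git("config", "user.name", "Dev")
git("commit", "--allow-empty", "-q", "-m", "initial commit")
git("branch", "-m", "trunk")  # name the base branch explicitly

# Several independent lines of work, each its own cheap local branch.
for topic in ["bug-abc", "bug-def", "docs-update", "feature-xyz"]:
    git("checkout", "-q", "-b", topic, "trunk")
    with open(topic + ".txt", "w") as f:
        f.write("work in progress on " + topic + "\n")
    git("add", topic + ".txt")
    git("commit", "-q", "-m", "WIP: " + topic)

git("checkout", "-q", "trunk")
git("branch")  # lists all four topic branches plus trunk
</code></pre><p>Switching between any of those in-progress tasks afterward is just another git checkout, which is the whole point of the analogy.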
Ask HN: What do you wish you had taught/done for your children at an early age? | Welcome to parenthood and congratulations on your new little one!!<p>I have two stepchildren (8 and 6) and a 2 year old. I'm far from an expert and, if you want straight up truth, I can't think of any other relationships that have made me doubt myself so much.<p>But, if I could give you a bit of advice, first, I'd tell you to genuinely get to know your child, figure out what he or she loves and get as involved in it as you can. My stepchildren were a little bit older when I met them, so I didn't get the chance to see them transition from babies into little people, but seriously bud, I'm starting to cry just thinking about watching my little girl start to develop into herself.<p>The more that I've opened myself up to letting my kids guide their own development, the more they've driven my own development. In fact, looking back on the last four years, I'd argue that they were more involved in my development than I was possibly involved in theirs.<p>I'm a stutterer who hides it well, so I'm maybe a little more sensitive about speech issues than the average bear, but I got into a habit that some early childhood education types seem to love. Lauren (my two year old) is very vocal, but there are some sounds that she has trouble with, and so I try to make those sounds part of our play.<p>The hard 'C' and 'G' sounds are two that Lauren has some trouble with, so one of our favourite games is 'Claw'. My hands are 'Big Claws' and her hands are 'Little Claws'. The Big Claws love Little Claws, but, being hands, they don't have brains, so they don't understand things like their own reflections and they can only say 'grrrrr'.<p>The neat thing is that now, she likes playing 'Claw' too, so she will call for them. In the early days, she clearly said 'Plaw', but after several months it's getting closer to a hard 'C', and the growls, which used to sound like 'derrrr', are getting closer to 'grrrrrr'.<p>Music and dance are also pretty huge for us, but my girlfriend and I are both absolutely obsessed with music. I am a hardcore vinyl collector and, seriously bud, whatever you do, if you collect vinyl, DON'T SHOW YOUR TODDLER HOW TO SCRATCH. You might think it's a good way to teach him/her how to handle vinyl, but I have two dead records (including a formerly mint copy of Highway 61 Revisited) and two dead cartridges to serve as constant reminders of the errors of my ways.<p>Some of my favourite moments with my kids have been our almost daily dance parties. I'll throw on anything imaginable and figure out what they like to listen to as we go. You might be surprised by some of the music that your kids love. My eight year old stepdaughter likes pop music and trance (?), my six year old stepson likes guitar even more than I do - he's currently pretty obsessed with Soundgarden and Nirvana, so my next mission is to introduce him to Kyuss. My two year old likes traditional toddler stuff like Dora, Daniel Tiger, and Thomas the Tank Engine, but she's also into some completely baffling stuff like Sepultura (Max Cavalera is the only singer she likes more than Daniel Tiger), Nailbomb (again with the Max Cavalera), Cavalera Conspiracy (are you seeing a trend?), the Melvins and the Wu Tang Clan. 
She's also really into traditional pow wow music (Young Bear is her favourite), traditional rockabilly (she used to nap to Johnny Cash and now she's developed a strong love for Buddy Holly and Jerry Lee Lewis) and psychobilly (there was a time when I genuinely thought that her first words would be Reverend Horton Heat lyrics).<p>I think that music and rhythm are really good for young brains and our dance parties are pretty amazing forms of exercise, but I think this all goes back to my first bit of advice. Introduce your kids to the widest range of things imaginable, figure out what they love, and I think that you'll be well on your way.<p>Edit - I forgot to mention two things. Lots of parents really love television/Youtube. We don't let our kids watch tons of television and we try to limit screen time in general, but sometimes, the screens will keep you sane. When your little one gets a little older, you might find yourself watching tons of children's television to see what is a nice blend of wholesome and educational. Of all the shows that I've watched, "Daniel Tiger's Neighbourhood" is my absolute favourite. The Mr. Rogers Foundation/Company/Whatever it is is heavily involved in Daniel Tiger. They're wonderful little shows. If Lauren had her way, she'd replace Music Man Stan with Max Cavalera, but hey, dare to dream.<p>The second bit of 'advice' I could give you is that it's okay to be frustrated by your little one. Kids are a big learning curve and it doesn't matter how much you love them, sometimes they'll be very frustrating. It's okay to feel like that and it's okay to vent. It's also okay (I'd say necessary) to be an adult with adult hobbies. Even when Lauren was very young, I still made a point of going out, watching the Yankees play and having a pint. |
Russia Bans 1.8M Amazon and Google IPs in Attempt to Block Telegram | Interesting to see which regions they're blocking, and how much they're blocking in each region. For IPv4 addresses only:<p><pre><code> Region Desc IPs Blocked % Blocked
ap-northeast-1 Asia Pacific (Tokyo) 1984800 786451 39.62%
ap-northeast-2 Asia Pacific (Seoul) 459024 131073 28.55%
ap-northeast-3 Asia Pacific (Osaka-Local) 65808 0 0.00%
ap-south-1 Asia Pacific (Mumbai) 524560 65542 12.49%
ap-southeast-1 Asia Pacific (Singapore) 1067552 425990 39.90%
ap-southeast-2 Asia Pacific (Sydney) 1147168 163852 14.28%
ca-central-1 Canada (Central) 196880 3 0.00%
cn-north-1 China (Beijing) 231456 0 0.00%
cn-northwest-1 China (Ningxia) 100368 0 0.00%
eu-central-1 EU (Frankfurt) 1049888 787013 74.96%
eu-north-1 EU (North tba) 65808 0 0.00%
eu-west-1 EU (Ireland) 3757344 1966319 52.33%
eu-west-2 EU (London) 393488 131087 33.31%
eu-west-3 EU (Paris) 131344 1 0.00%
sa-east-1 South America (Sao Paulo) 491808 65536 13.33%
us-east-1 US East (N. Virginia) 10317600 4260203 41.29%
us-east-2 US East (Ohio) 1179920 131079 11.11%
us-gov-east-1 AWS GovCloud (US, East) 65552 0 0.00%
us-gov-west-1 AWS GovCloud (US) 131088 32768 25.00%
us-west-1 US West (N. California) 1311536 196642 14.99%
us-west-2 US West (Oregon) 4917552 1769549 35.98%
Total 29590544 10913108 36.88%</code></pre> |
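For the curious, a tally like the one above can be reproduced from Amazon's published ranges at https://ip-ranges.amazonaws.com/ip-ranges.json. A rough Python sketch of the idea; the blocklist filename is my assumption, and it simplifies by assuming each blocked CIDR sits wholly inside a single, non-overlapping AWS prefix:<p><pre><code>import ipaddress
import json
import urllib.request
from collections import defaultdict

with urllib.request.urlopen(
        "https://ip-ranges.amazonaws.com/ip-ranges.json") as r:
    data = json.load(r)

# Deduplicate: the same prefix appears under several AWS services.
regions = defaultdict(set)
for p in data["prefixes"]:  # IPv4 prefixes only
    regions[p["region"]].add(ipaddress.ip_network(p["ip_prefix"]))

# Hypothetical blocklist export, one IPv4 CIDR per line (filename assumed).
with open("blocked.txt") as f:
    blocked_nets = [ipaddress.ip_network(line.strip())
                    for line in f if line.strip()]

for region in sorted(regions):
    total = sum(n.num_addresses for n in regions[region])
    blocked = sum(b.num_addresses for b in blocked_nets
                  if any(b.subnet_of(n) for n in regions[region]))
    print(f"{region:<20}{total:>10}{blocked:>10}"
          f"{100.0 * blocked / total:9.2f}%")
</code></pre>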
Man Accused of Making 97M Robocalls | Nomorobo founder here. I'm the "robocall guy" that everyone quotes in the news.<p>Instead of just popping my head into every comment, I wanted to give some insight into what's actually happening out there.<p>Background: I won the FTC Robocall Challenge back in 2013. I turned my prototype/idea into Nomorobo, which is the leading robocall blocker out there. In 4 years, we've stopped over 630 million robocalls from reaching people.<p>I'll actually be testifying at next week's House subcommittee hearing on robocalls.<p># Not picking up unknown numbers<p>Unfortunately, this is what people think is the best solution. Even the government gives this advice. #1 is to use a robocall blocker and #2 is to not answer numbers that you don't recognize.<p>That frustrates me to no end. That's not a good answer.<p>"Doctor, it hurts when I do 'this'. Then don't do 'that'."<p>I really get worried when people say they turn on Do Not Disturb. What if there's an emergency? You have no idea who needs to call you.<p>This is especially important for people with kids. Is it a school? Is it a neighbor?<p>This is an actual story that happened to me - no BS.<p>My Uncle wound up in the hospital last month. Ambulance had to come pick him up off the bathroom floor and everything. No one in the family knew it was happening.<p>I got a call from a number that I didn't recognize but, since I trust Nomorobo, I answered it. It was my Uncle telling me what happened, what hospital he was in, etc.<p>Damn good thing that I answered that call.<p>But the fun doesn't stop there.<p>A few hours later, I go over to his house to pick up a bunch of his stuff and bring it over to the hospital for him. While I'm getting things together, his old flip-style cell phone starts ringing. I figure it's one of his friends that's worried about him.<p>I answer it.<p>"Congratulations! You've won a free cruise."<p># Neighbor Spoofing<p>Yep - if the number shown is close to yours and it's not in your contacts, it's probably a robocaller. But spoofing is the norm with robocall scams.<p>Our algorithm detects over 1300 new robocalling numbers <i>every day</i> and they're basically all spoofed.<p>Whatever number is shown usually can't be called back. It usually doesn't belong to the actual robocaller. And they usually try to make the call take a confusing journey through "the tubes" that's impossible to trace back.<p>Neighbor spoofing is really hard for carriers to stop because a lot of people have sequential numbers and they don't want to accidentally block good calls.<p>It's especially confusing when people don't understand what's going on and they call back the unknown number:<p>Person A: Why did you call me?
Person B: I didn't call you. Why are you calling me?
Person A: Because you called me.
Person B: No, I didn't.
Person A: Yes, you did.
Person B: Leave me alone, crazy person. [blocked]<p>But, as an app on your phone, it's really easy for us to stop.<p>If you give us permission to look at your contact list (they never leave your phone and are never stored by us), then we can fully block all neighbor spoofed calls. If you don't give us that permission, we simply identify them as "Robocaller" whenever they call.<p># Wasting their time<p>Don't waste YOUR time. It will not help. You are trying to warm up the ocean by pissing into it.<p>The scale that these automated callers work at is unbelievable.<p>According to our stats, 40% of all calls in the US are spam robocalls.<p>Someone mentioned Jolly Roger Telco. We worked with them this holiday season to make www.DoNotCallChristmas.com. It was fun and it makes people feel good but that's about the only impact it has on the problem.<p># So...why don't the carriers just make the calls stop?<p>There are a lot of tech people here on HN. You know TFW someone says to you, "Oh, that's easy - you just hook up a database to a web page, right?"<p>"And put some of that blockchain in there while you're at it."<p>Well, it's the same way with the phone system. It's not as easy as it looks.<p>The phone system has a birth defect. Call it one of the worst cases of technical debt, ever.<p>When it was first created, it was a closed, trusted system (with AT&T running the whole shebang). No one could imagine a situation where someone would lie about the caller ID so they didn't require it to be verified.<p>But then the system changed. Deregulation. The rise of VoIP. Interconnectivity.<p>Whoops. Toothpaste is out of the tube. It's tough to put it back in.<p>So, yes, secure and verified caller ID systems are being worked on (STIR/SHAKEN) but it's years off.<p>Will it reduce voice spam? Of course. But, this problem will NEVER go away.<p>People still get ripped off every day by thinking that the Nigerian Prince is going to make them rich.<p># What about the Do Not Call list?<p>The laws that govern automated calling were written back in the early 90's.<p>I was using a 14.4k baud modem and connecting to BBSes at that time. The internet as we know it didn't exist. People were still paying by-the-minute for long distance phone service. Then technology rocketed past regulations.<p>Today, the Do Not Call list is virtually useless against modern robocalling scammers.<p>At the House subcommittee hearing next week, I'm actually going to advocate that the government scrap all the existing laws and rewrite them with a simple, plain thought:<p>High frequency, automated phone soliciting should be opt-IN, not opt-out.<p>You must have the express written permission of the person that you are calling. Period. Full stop.<p>It doesn't matter if your number is on the Do Not Call Registry or not.
It doesn't matter if you're calling a landline or a cell phone.
It doesn't matter if it's a business or a residential line.<p>Got permission? OK. Don't have permission? Nope.<p>Note: If you're sending a purely informational message, this doesn't apply. I'm only talking about commercial solicitations here. Think of it like preventing modern day door-to-door salespeople. Ride hailing notification, doctor's office reminders, schools closing, etc. just aren't the same type of thing.<p># Closing thoughts<p>So, in 2018, the best way to stop these types of scams is by using a robocall blocker. I'm <i>highly</i> biased here but I think Nomorobo is the best one out there. We understand the problem and the solution better than anyone else. It's completely free on landlines and has a 14-day free trial on mobile. On top of all that, we're extremely privacy friendly.<p>And, while I never blame the victims, there is a bit of a shared responsibility that we all have here. The more people that use robocall blocking technology, the less effective these scams will be. The less effective they are, the less of them there will be. It's a virtuous cycle.<p>Let me know if there's anything else that you'd like to dig into. I'm all ears. |
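To give a sense of why neighbor spoofing is easy for an app to catch on-device, here's a toy Python version of the heuristic described above (same area code and exchange as your own number, not in your contacts). This is my own illustration with made-up numbers, not Nomorobo's actual algorithm:<p><pre><code># Toy neighbor-spoofing check: flag a call when the number shares
# your area code and exchange but isn't in your contact list.
def looks_neighbor_spoofed(incoming: str, own_number: str,
                           contacts: set) -> bool:
    if incoming in contacts:
        return False  # known callers are never flagged
    digits = "".join(c for c in incoming if c.isdigit())
    own = "".join(c for c in own_number if c.isdigit())
    # Same first six digits (area code + exchange) as your own number
    # is the classic neighbor-spoof pattern.
    return len(digits) == 10 and digits[:6] == own[:6]

contacts = {"5551234567"}
print(looks_neighbor_spoofed("5559870001", "5559876543", contacts))  # True
print(looks_neighbor_spoofed("5551234567", "5559876543", contacts))  # False
</code></pre>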
The Only Argument You Will Ever Need Against PHP | Your article vents about the frustration of working on someone else's code, when you have the advantage of hindsight and something that works at least somewhat. That has nothing to do with PHP. I think you should examine your assumptions.<p>> ...in 20 years, I've never encountered a situation in which I needed something PHP does that something else didn't.<p>Fine, but just a few paragraphs later you actually describe what PHP does that other things don't: allows less experienced/skilled programmers to get something to work quickly and deploy it easily. That actually has tremendous value to a lot of businesses.<p>> ...mopping up some hacked or otherwise messed up doo-dad or other, because the person who put it there couldn't.<p>Maybe the person who put it there just ranks way below you in terms of skill. Or maybe they had budget and time constraints and an anxious customer in a hurry to get something working on a $10/month shared hosting setup.<p>> I feel injured by this. I feel robbed.<p>With all respect, grow up. If you take code this personally -- especially someone else's code -- you show an immature and elitist attitude toward your profession.<p>> This kind of expertise arbitrage, where the skill level you need to set something up initially is nowhere near the skill level you need to fix it when it breaks, is pervasive in the software industry...<p>Yes, the entire profession of programming rests on what you call expertise arbitrage. It may derive from the difference between my expertise and my client's, or my expertise and the previous developer's. We make money from the arbitrage. If you don't like that kind of work, don't take on those jobs.<p>> If an attacker can smuggle a PHP file onto your document root, then they can execute it. If they can do that, then they own you. This attack vector cannot be eliminated. If you use PHP, you will always be fighting it. Forever.<p>If someone can smuggle a file into your document root or anywhere else on your server, you have a problem no matter what language. You can mitigate this risk easily with well-understood practices for PHP and server configuration. I have worked mostly with PHP for almost 20 years and have actually never seen this happen, though I have found sites that could allow it (and I fixed the problem easily). No one is constantly fighting this problem forever.<p>> Once again, this situation is not unique to PHP, it's just that PHP is where you're most likely to see it. This issue also isn't strictly about document roots, but more about the level of control over what code gets executed. It's the difference between a default-deny policy and default-allow.<p>People who work with web apps see more problems with PHP because PHP dominates the web application space by a very wide margin. The last time I saw a web app with huge security holes (200 failures in an automated security audit) it was Ruby/Rails, not PHP.<p>> PHP web apps can be made to run outside the document root just like anything else, and indeed this is how modern MVC frameworks operate. Sure they can, but then you obviate the point of using it. If you aren't going to be plunking files into your document root for immediate execution, you may as well use some other stack.<p>Sure. You miss an even more important reason for choosing PHP: the relatively mature libraries/frameworks available, and the huge number of PHP programmers relative to other web-ready languages. 
If I sell my client on a Haskell or Smalltalk solution because I hate PHP, I may have done my client a serious disservice because they will have a harder time finding someone capable of working on that. Or maybe you mean Java or Node.js or .Net, all of which have their own security and deployment issues.<p>> What kind of jobs though? Mopping-up jobs, of course. Moreover, on the other side of that job is an employer, who is more than happy to take advantage of all this competition. If you aren't working at Facebook, the Wikimedia Foundation, Automattic or Acquia, it's probably worth asking yourself, dear PHP developer, if you are being played.<p>Nope. First, 90% of programming amounts to what you call "mopping up" and what the rest of us call maintenance and enhancement. Limiting yourself to greenfield projects in your preferred language won't lead to employment for most programmers. And you misunderstand how competition for jobs works -- PHP developers can and do make just as much as people working with other stacks.<p>> Expertise arbitrage, though, irrespective of its substrate, is very real and very much a liability. This to me makes one's choice of stack more than just a matter of taste: it's an object of organizational design.<p>What does that even mean?<p>> And if that isn't good enough, I can tell you from experience that banning PHP will eliminate aeons of monotonous tweezing out of Russian dick-pill spam.<p>No, it won't. Banning open email relays would help with that particular problem, but that has nothing to do with PHP. Programmers will always vary greatly in skill level and we will always have a lot of low-quality software running in production. Get over yourself. |
Widescreen laptops are dumb | "As if all that wasn’t enough, there’s also the matter of tabs. Tabs are a couple of decades old now, and, like much of the rest of the desktop and web environment, they were initially thought up in an age where the predominant computer displays were close to square with a 4:3 aspect ratio."<p>Tabs are at least three decades old now, and they weren't originally restricted to just one edge of the window:<p>>Tab (GUI):<p><a href="https://en.wikipedia.org/wiki/Tab_(GUI)" rel="nofollow">https://en.wikipedia.org/wiki/Tab_(GUI)</a><p>>The WordVision DOS word processor for the IBM PC in 1982 was perhaps the first commercially available product with a tabbed interface. PC Magazine in 1994 wrote that it "has served as a free R&D department for the software business—its bones picked over for a decade by programmers looking for so-called new ideas". The NeWS version of UniPress's Gosling Emacs text editor was another early product, with multiple tabbed windows in 1988. It was used to develop an authoring tool for the Ben Shneiderman's HyperTIES browser (the NeWS workstation version of The Interactive Encyclopedia System), in 1988. HyperTIES also supported pie menus for managing windows and browsing hypermedia documents with PostScript applets. Don Hopkins developed and released several versions of tabbed window frames for the NeWS window system as free software, which the window manager applied to all NeWS applications, and enabled users to drag the tabs around to any edge of the window.<p>Notice the layout of the overlapping Emacs windows with tabs sticking out of their right edge: since text is much wider than it is tall, you can stack up a lot more tabbed windows with tabs on their left or right side, and still read their labels.<p>>HCIL Demo - HyperTIES Authoring with UniPress Emacs on NeWS:<p><a href="https://www.youtube.com/watch?v=hhmU2B79EDU" rel="nofollow">https://www.youtube.com/watch?v=hhmU2B79EDU</a><p>>Demo of UniPress Emacs based HyperTIES authoring tool, by Don Hopkins, at the University of Maryland Human Computer Interaction Lab.<p>Notice how you can drag the tabs of these NeWS windows to any edge or proportion of the height or width, and use the tab as a proxy for the window by popping up a window management pie menu on it, even if the rest of the window is covered:<p>>NeWS Tab Window Demo:<p><a href="https://www.youtube.com/watch?v=tMcmQk-q0k4" rel="nofollow">https://www.youtube.com/watch?v=tMcmQk-q0k4</a><p>>Demo of the Pie Menu Tab Window Manager for The NeWS Toolkit 2.0. Developed and demonstrated by Don Hopkins.<p>Notice how the tabbed windows can be stuck on the visual PostScript stack like a short order chef's "spike". The tabs enable "direct stack manipulation": and they are constrained (by "snap dragging") to slide up and down on the stack (rearranging the order of the items on the PostScript stack), and can be pulled far enough away that they "pop" off the stack, or dragged back onto the stack so they snap into place, and you can directly drop them into any depth of the stack.<p>>PSIBER Space Deck Demo:<p><a href="https://www.youtube.com/watch?v=iuC_DDgQmsM" rel="nofollow">https://www.youtube.com/watch?v=iuC_DDgQmsM</a><p>>Demo of the NeWS PSIBER Space Deck. Research performed under the direction of Mark Weiser and Ben Shneiderman. Developed and documented thanks to the support of John Gilmore and Julia Menapace. Developed and demonstrated by Don Hopkins.
Described in "The Shape of PSIBER Space: PostScript Interactive Bug Eradication Routines".<p>>The Shape of PSIBER Space - October 1989.
PostScript Interactive Bug Eradication Routines.<p><a href="http://www.donhopkins.com/drupal/node/97" rel="nofollow">http://www.donhopkins.com/drupal/node/97</a><p>>There is a text window onto a NeWS process, a PostScript interpreter with which you can interact (as with an "executive"). PostScript is a stack based language, so the window has a spike sticking up out of it, representing the process's operand stack. Objects on the process's stack are displayed in windows with their tabs pinned on the spike. (See figure 1) You can feed PostScript expressions to the interpreter by typing them with the keyboard, or pointing and clicking at them with the mouse, and the stack display will be dynamically updated to show the results.<p>>Objects on the PSIBER Space Deck appear in overlapping windows, with labeled tabs sticking out of them. Each object has a label, denoting its type and value, i.e. "integer 42". Each window tab shows the type of the object directly contained in the window. Objects nested within other objects have their type displayed to the left of their value. The labels of executable objects are displayed in italics. [...]<p>>Tab Windows<p>>The objects on the deck are displayed in windows with labeled tabs sticking out of them, showing the data type of the object. You can move an object around by grabbing its tab with the mouse and dragging it. You can perform direct stack manipulation, pushing it onto stack by dragging its tab onto the spike, and changing its place on the stack by dragging it up and down the spike. It implements a mutant form of "Snap-dragging", that constrains non-vertical movement when an object is snapped onto the stack, but allows you to pop it off by pulling it far enough away or lifting it off the top. [Bier, Snap-dragging] The menu that pops up over the tab lets you do things to the whole window, like changing view characteristics, moving the tab around, repainting or recomputing the layout, and printing the view.<p>>Designing to Facilitate Browsing: A Look Back at the Hyperties Workstation Browser.
By Ben Shneiderman, Catherine Plaisant, Rodrigo Botafogo, Don Hopkins, William Weiland.<p><a href="http://www.donhopkins.com/drupal/node/102" rel="nofollow">http://www.donhopkins.com/drupal/node/102</a><p>>Since storyboards are text files, they can be created and edited in any text editor as well as be manipulated by UNIX facilities (spelling checkers, sort, grep, etc...). On our SUN version Unipress Emacs provides a multiple windows, menus and programming environment to author a database. Graphics tools are launched from Emacs to create or edit the graphic components and target tools are available to mark the shape of each selectable graphic element. The authoring tool checks the links and verifies the syntax of the article markup. It also allows the author to preview the database by easily following links from Emacs buffer to buffer. Author and browser can also be run concurrently for final editing. [...]<p>>The more recent NeWS version of Hyperties on the SUN workstation uses two large windows that partition the screen vertically. Each window can have links and users can decide whether to put the destination article on top of the current window or on the other window. The pie menus made it rapid and easy to permit such a selection. When users click on a selectable target a pie menu appears (Figure 1) and allows users to specify in which window the destination article should be displayed (practically users merely click then move the mouse in direction of the desire window). This strategy is easy to explain to visitors and satisfying to use. An early pilot test with four subjects was run, but the appeal of this strategy is very strong and we have not conducted more rigorous usability tests.<p>>In the author tool, we employ a more elaborate window strategy to manage the 15-25 articles that an author may want to keep close at hand. We assume that authors on the SUN/Hyperties will be quite knowledgeable in UNIX and Emacs and therefore would be eager to have a richer set of window management features, even if the perceptual, cognitive, and motor load were greater. Tab windows have their title bars extending to the right of the window, instead of at the top. When even 15 or 20 windows are open, the tabs may still all be visible for reference or selection, even though the contents of the windows are obscured. This is a convenient strategy for many authoring tasks, and it may be effective in other applications as well.<p><a href="https://upload.wikimedia.org/wikipedia/en/2/29/HyperTIESAuthoring.jpg" rel="nofollow">https://upload.wikimedia.org/wikipedia/en/2/29/HyperTIESAuth...</a><p>Unfortunately most of today's "cargo cult" imitative user interface designs have all "standardized" on the idea that the menu bars all belong at the top of the screen and nowhere else, menu items should lay out vertically downward and in no other direction, tabs should be rigidly attached to the top edge and no other edge, and the user can't move them around. But there's no reason it has to be that way. |
Be likeable or get fired | I have found myself in a similar situation. The caveat, though, is that when I was hired in, I was the replacement for the individual everyone liked. So from day 0, I was disliked. This put me behind the eight ball, without ever having even the slightest chance to share my knowledge and experience with anyone. I found myself labeled as having a communication problem, due to my coworkers' gossip and hearsay spread behind my back.<p>The truth of the matter is, even if you are technologically ahead of, on par with, or behind your counterparts in a company, you are screwed when you are not liked.<p>There are only a handful of people I have met that are truly honest, with themselves and others. Which leaves everyone else never being capable of accepting new or different ideas, or, to the point, any type of criticism, because they honestly cannot do any self reflection.<p>People are more likely to believe in lies than the truth.<p>If you find yourself not being liked, regardless of the reason, it is time to go. Be it on day one, or day 999+.<p>If you find your suggestions, comments, and criticism are not received, do not argue with idiots and try to win your case with a follow-up. If they were not receptive to feel-felt, or pathos/ethos/logos forms of communication, then you're going through an exercise in futility.<p>My suggestion for anyone before even thinking of accepting a new position: during the interview, ask the following.<p>1. Why is the position open? Growth? Replacement of a butt in a seat? If the latter, ask whether it was a resignation or a termination.<p>2. Ask: could I have or see an employee handbook? You can glean a lot about company culture just by reading. Red flags include things like dress code restrictions (when the position may not ever be public facing), i.e. a long, exhausting list of things you cannot wear. Unfair things like banning shorts, but not banning skirts. Dress code should just be stated by job role: business professional, business casual, relaxed business casual, casual. Look for phrases/wordings that might suggest management won't deal flexibly with individual employees, i.e. that management/leadership has taken a "past employee ruined it for everyone" stance. Look for anything that will make you uncomfortable working there.<p>3. Ask for their SDLC documentation/process/methodology, coding style guidelines, and the technology stack they use. If they can't produce them, or speak in detail, or are vague on details, that's another red flag: they might be cowboy coding, silo building, running a bus factor of one per process, etc. You want to make sure you are not walking into an environment with a nightmare scenario where there is a one-to-one mapping of a single dev to a project and only that dev touches/knows anything about it. With that there are typically resource constraints, poor to no time allocation for code review, testing, and documentation, as well as the technical debt/corner cutting that goes along with not having good project/SDLC management practices.<p>4. What is the process for changing any of the current SDLC, methodology, practices, guidelines, etc.? Explain that you'll be the fng (freaking new guy/gal) and that you will be bringing your knowledge in, as well as absorbing the new-to-you SDLC/process/yadda yadda, and you want to understand how new and possibly different ways of doing things (innovation) are incorporated. You are trying to spot what the parent article fell into, and avoid becoming the unliked team member. 
You also want to know if they think the word “standard” means etched in stone, never changing. Standards do change, and are revised. I have never seen a standard that does not have a revision number.<p>5. Blatantly ask: what is your stance on open source software? Follow up with: what is your take on Microsoft moving to open source and being a Linux Foundation member? You want to make sure you know if they are current on tech news and events. You want to make sure they are not using dead technology stacks, or SSIS for things beyond its intention, or have drunk the MS kool-aid from 10-20 years ago and missed the memo that they were wrong about Linux being a cancer. You're also trying to spot the "I got a tech job" mindset: I know only XYZ and have lost the ability to learn and stay up to date with current trends in the industry. E.g. there is no such job title as database admin; database administration is a hat/role that devOps and software engineers wear when needed. You do not want to find yourself having to use SSIS for application development when it is a database management tool, or having to maintain SQL stored procedures that average ~5000 lines long because they only learned T-SQL. You are trying to spot whether the company is a "right tools for the right job" shop, or a "we spent 200,000 on a hammer so everything gets hit with the hammer" shop.<p>Final thoughts: to avoid the problems I once encountered, as well as what the parent post went through, walk into any interview you get with the mindset of “You are interviewing them, they are not interviewing you.” Go in with good questions about the company and culture, like I suggested above, to see if they are a good fit for you and the working conditions you work best under. Be honest, be humble, don't bullshit, and it's ok to say "I don't know, I would have to research and learn." At the end of the day, if the answer seems like a no, then it's a no; don't waste your or their time. |
A brief list of what Scrum gets wrong | I'm disappointed that this throwaway list of unsubstantiated bullet points is being treated seriously, but seeing as it is, here's "A brief list of what this article gets wrong about Scrum":<p>1. The author makes the unfortunately all-too-common mistake of confusing agile development with software design. Agile says very little about software design, other than observing that it is incremental and iterative rather than revelatory and monolithic. Scrum, as an example of agile practice, is primarily concerned with implementation and delivery. The Product Owner can consult anyone they want in the course of designing the software, and of course that includes the engineering team. Scrum encourages the PO to seek the participation of all stakeholders in sprint planning and review, but other than that, design is left to the PO's discretion.<p>2. Nothing in Scrum discourages innovation, and agile certainly encourages innovation to a much greater degree than the detailed spec-driven alternative.<p>3. There is actually a grain of truth to this one, but it confuses what happens within one sprint with the overall process of software evolution over many sprints. What the author is complaining about here is basically Scrum's version of YAGNI ("You Ain't Gonna Need It"). It's the principle of doing the simplest thing that could work first, and then elaborating on it. My 35 years of software development experience tell me that this is a sound approach. Others may disagree.<p>4. Scrum does not require "dividing every task into small items that can be completed by anyone on the team". It definitely encourages dividing work into meaningful pieces with tangible, testable results. But nothing prevents a team from matching work with the team members most suited to doing it. The author seems to be complaining about working on a team in general, where the credit for the product must be shared with others. Nothing about Scrum encourages 'lack of ownership' to any greater degree than any other team-based development process.<p>5. This is simply wrong. Scrum encourages modification and explicitly provides the flexibility for it by ensuring that you're never more than a sprint away from changing direction if it becomes clear that's necessary. And of course, if the team is <i>really</i> blindsided by new information, the PO can cancel the current sprint and start a new one.<p>6. Scrum explicitly encourages self-reflection through the retrospective that occurs every sprint. The author here is apparently ignorant of this aspect, so maybe it was not practiced in his company.<p>7. Scrum requires minimal management. The Scrum Master is not intended as a manager; they are intended as a way to short-circuit bureaucracy so that any impediments to the team's progress can be removed. And I've never seen any software development process of any scale that doesn't have a Product Owner or a team lead, though the titles may change.<p>8. There are lots of bad tools. Don't use them. Find good ones, if you need them. Scrum <i>requires</i> very little; a whiteboard and some sticky notes will do.<p>9. Scrum does not discourage (or encourage) bug fixes or refactoring. As the author notes, that's at the discretion of the Product Owner. Again, that's always true, whether you're using Scrum or some other software management system.<p>10. The Product Owner is responsible for delivering the product on time.
Of course they are accountable, and I guarantee that if they're at all competent, they're doing far more reporting and documentation for their superiors than the team is for them.<p>11. Scrum assumes none of these things. Not a single one. Who has a Scrum Master at every engineering meeting, for instance? Ridiculous. The Scrum Master runs the daily scrum, which is a 15- or 20-minute daily sync-up. They don't attend any other technical meetings unless someone wants to invite them for some reason.<p>12. Of course scrum points aren't meaningless; why would anyone bother to track them if they were? They tell the team what its velocity is, which is useful to the team for sprint planning, and useful to management for resource planning.<p>13. Scrum disempowers the best developers from...what, exactly? Scrum is designed to allow devs to focus on delivering, so they don't have to guess at what's being requested, and so they get quick feedback on what they've implemented. At all times, they know how they're doing. There's nothing disempowering about Scrum.<p>14. The Manifesto doesn't say "no processes or tools". The Scrum process is very light compared to the spec-driven process that it replaced. And as noted above, it doesn't require many tools. The fact that people feel like they need ten Atlassian services to do it isn't Scrum's fault, any more than the fact that you bought a Total Gym and a ThighMaster means that fitness is bunk.<p>15. Any developer who truly believes that "any task that has been done before in software does not need to be redone because it can be easily copied and reused," and that any new task cannot be estimated, should really find another profession.<p>Honestly, I think the article is a troll. For the author's sake, I hope so. If not...well, he says he's a Programmer and Dog Owner. I hope he's a really good dog owner. |
A guide to building a fast electric skateboard at home | Late to this party; I am into this hobby, and you <i>absolutely must</i> heed the following:<p>1. Learn to ride a non-powered skateboard or longboard first. I guarantee you, you do <i>not</i> have the balance or skills to ride a longboard at 10kph off the bat, let alone 50kph. I've snowboarded and wakeboarded for a significant chunk of my life and I'm telling you, longboarding, especially fast longboarding, is not an easy skill to master, let alone pick up as a twenty-something.<p>2. allsunny's comment on learning to fall is spot on. You need to know, instinctively, how to bail at different speeds. At certain speeds and obstacles you can run off; at others you have to tuck and roll; at still others you will rely on your knee pads or elbow pads to protect you from impact. allsunny's comment is spot on for other reasons, but for now I want to move on to...<p>3. Protection! Always wear your protective gear! I wear wrist guards, knee guards, elbow guards, and a helmet (not full face, unfortunately) every time I ride. I don't care how good you are: if you are in your mid-twenties or older, a fall on concrete is really going to mess you up. Not that it doesn't mess you up when you're a kid, just that kids have a greater potential to bounce back from these things...<p>OK, now on to the article itself. There's a lot left unsaid, leaving you with the false impression that you have a proper and informative guide. Unfortunately, it barely scratches the surface.<p>(A) Hub vs belt: hubs have more cons than just heat. Hub motors tend to get really beaten up, since your motors are taking the impact of the terrain. The urethane, or "tyre" part of the hub, is also typically not easily replaceable, falls off easily, or wears badly. That's kind of changing now, but for the most part, if you get a cheap hub you're going to have to replace the whole thing. Furthermore, the urethane is typically very thin around hub motors, and your ride quality is going to suffer as more vibrations are transmitted up.<p>Hubs are really quiet, though.<p>Belt drives have many advantages. The motors get beat up less (unless you don't have enough clearance). You can "gear up" or "gear down" a belt drive (albeit not while in motion), giving you more speed and torque options; see the rough gearing math in the P.S. at the end of this comment. The article makes a big deal about dust and dirt and swapping belts, but most belt drives have some kind of cover, and belts are cheap to replace. Most importantly, you can buy wheels from companies who actually know what they're doing in terms of urethane formulation, grip, comfort, etc. You have <i>options</i> with a belt drive.<p>They can be really noisy, though.<p>Remember, if you get a belt drive, that you need a way to maintain tension. If your kit doesn't let you do so (e.g. diyeboard kits), it's... well, it's not worthless, but you'll be swapping belts out a lot.<p>(B) ESCs: there are ESCs, and there are ESCs. The VESC the article author mentions is indeed open source hardware and firmware, but the quality of the hardware you get is really hit and miss depending on who you buy it from: some VESC hardware cannot run in certain firmware-settable modes and will go up in flames if you try (look up FOC). Furthermore, there are so many versions of the hardware out there that it can be a nightmare to find the compatible firmware for it. And finally, they're usually expensive as hell.<p>On the other hand, the cheap chineseum ESCs have their own set of problems and limitations.
While cheap, they are typically not as configurable as VESCs, and sometimes lock you in, component-wise, to the vendor's radio control system, drive, and battery combo.<p>(C) Batteries: do not buy cheap eBay hobby batteries. I've played with these in the context of RC buggies, planes, and drones, and they are really hit and miss. That is fine in the context of RC toys, which you play with in a safe environment and never for more than a few minutes at a go; it's not so fine in the context of a moving EV you're standing on, going at 50kph, being constantly hammered by road conditions and put out in the hot sun for half an hour to hours at a time. Unless you have taken appropriate precautions and screened the batteries and otherwise know what you're doing, I wouldn't give them a second glance.<p>You can look up forums for reliable battery suppliers and either build your own pack from good cells or buy packs from those suppliers. If you want to go the DIY route, GA cells apparently do pretty well, but I wouldn't put a pack together without thinking about cell-level fusing, having either special soldering equipment or a spot welding setup, strain relief, wire gauge selection, etc. If you set it up wrong, you're going to end up with too much current running through too thin a conductor, and watch it glow with the heat under load. I've seen it. It's cool, but also scary... mostly scary.<p>(D) Oh yeah, you're gonna need a battery charger. What do you mean, you don't have an adjustable-chemistry battery charger just lying around at home so you can choose from a variety of available battery chemistries?<p>Alternatively, you can source a battery management system, which is a circuit matched to your battery chemistry, and just plug it into a properly set up power supply. But then you have to ensure the power draw of the drive system is set up for it.<p>That said, one potential advantage of having a battery management system is the ability to take advantage of the regenerative braking capabilities of your ESC safely.<p>(E) Remote control. The article didn't mention this, but you're going to need one. Please don't buy a Bluetooth controller; one signal hiccup at 50kph and your skin could be a smear on the floor.<p>Speaking of hiccups, you oughta determine, and set, the failsafe behaviour of your remote-controlled EV. The typical failsafe mode is to freeroll, which means that if it fails, you'd better be able to ride and stop your board without having to rely on the electric crutches. Callback to allsunny's comment.<p>(F) Deck, wheels and trucks. So you're putting together your 50kph drive and decide to slap it on a donor deck. So easy, amirite?<p>For heaven's sake don't do this. Longboard decks come in many shapes and sizes. Again, if you're the kind to have ridden around on non-electric skateboards or longboards, you'll have a very good idea of the available options. If not, you have to do your research. The kind of setup that is fun to ride at 10kph is decidedly deadly at 50kph (look up "speed wobble"). The wheelbase, deck shape (both lengthwise and cross-sectionally), ground clearance, softness of wheels, width of truck, stiffness of bushing, geometry of truck, etc., all make a difference. There is a fair mix of personal preference and objective necessities too.<p>(G) Putting it all together. It's not a PC; you can't just plug in the components and have them work. You'll want to weatherproof the components somewhat and protect them from impacts.
There are also miscellaneous parts you may want or need, like switches, anti-spark connectors, battery level indicators, etc.<p>One thing often missed in assembly, for example, is that people buy cool-looking carbon fibre decks, crowd their receiver next to the dense batteries or high-current wires, and then wonder why their low-powered radio transmitter and receiver work only intermittently. Not all radios, and not all decks, but some.<p>In conclusion: there are a whole lot of miscellaneous matters that aren't mentioned in the post, and I didn't want people reading it to think it was going to be a single-blog-post level of simplicity to jump on a 50kph electric skateboard and off you go.<p>Last word, I promise: 50kph is scary. All of you downhill longboarders who have earned your way through the school of hard knocks know this. For the rest of us, 20-25kph is plenty fast, and hard to react at if you haven't learnt to ride on a non-powered longboard or skateboard. Have fun, and take it easy!
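<p>P.S. Since gearing came up under (A): here's the rough top-speed arithmetic for a belt drive, as a small Python sketch. These are made-up example numbers; the motor kv, battery voltage, pulley teeth, and wheel size are all placeholders you'd swap for your own parts.

    from math import pi

    motor_kv = 190            # rpm per volt, unloaded (hypothetical motor)
    battery_v = 36.0          # e.g. a 10s li-ion pack, nominal voltage
    motor_pulley = 16         # teeth on the motor pulley
    wheel_pulley = 36         # teeth on the wheel pulley
    wheel_diameter_mm = 90

    wheel_rpm = motor_kv * battery_v * (motor_pulley / wheel_pulley)
    # wheel circumference in km, times rpm, times 60 minutes per hour
    speed_kmh = wheel_rpm * (wheel_diameter_mm * pi / 1e6) * 60

    print(round(speed_kmh, 1), "km/h no-load; real speed is lower once losses and voltage sag bite")

With these particular numbers you get roughly 51.6 km/h on paper, exactly the kind of speed you should not attempt without the skills and gear above. A bigger wheel pulley lowers that figure and buys you torque for hills; that trade is the whole point of being able to re-gear. |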
Tesla Turbine | The Trouble with Tesla:<p>Like a number of people, I have gotten together a number of flat disks to try to make a working Tesla turbine (yes, it was easy to build and it did work), and then I tried hitting the websites for some information to improve it.<p>First problem I ran into: many of these sites seem to be run by "true believers". They lack not only turbines doing real jobs, but even real numbers for the machines they have built.<p>Some of the claims are for more energy out than energy in, something I know is impossible! When you see that claim, you know you are dealing with a kook or a con artist. Second problem: a lot of them seem to think that they are going to make a fortune when everyone starts using Tesla turbines instead of IC engines. Because of this they have on purpose left out some of the construction details on their websites. Anyone who tries to build a Tesla turbine without figuring out the fine details will have a piece of junk as far as performance is concerned.<p>Third problem: I have <i>NEVER, EVER</i> seen what appears to be a reliable report on the performance you can expect from a Tesla turbine. Can it reach 25% efficiency? I believe so. Can it be a lot more? I don't know. Additionally, I do see a lot of problems in trying to get it to be better. It just is not worth the money and effort to me personally to try to do all the things needed to find or create the better designs.<p>________________________________________<p>Common problems promoters of Tesla turbine designs do not mention:<p>Disk Mass:<p>Because it is easy to stack a large number of disks in a small volume, some of the websites about Tesla turbines like to report in horsepower per cubic inch to show how much better than IC (internal combustion) engines Tesla designs are. Of course, in real life how small your engine is matters less than how heavy it is, and it is easy to make a heavy Tesla turbine. In terms of horsepower per kilogram, Tesla turbines do not do that well unless the disks are very, very thin. Thin but strong disks, however, turn out not to be as easy to make or as cheap as you would think at first.<p>Disk Speed:<p>Tesla turbines can spin very fast when they are not under any load. Very fast indeed! Because of this it is easy to surpass the strength of most common metals and plastics used to make the disks just by using input feed pressures in the 100s of PSI, if you are using compressed air or a steam boiler. Once a high-speed turbine starts to fail due to centrifugal effects, any unbalanced forces will quickly tear the disks apart, to the point that the breakup of the disks acts more like an explosion than anything else. This high-speed problem adds two more concerns: balanced disks and gearing down the speed.<p>Balanced Disks:<p>The high rate of rotation means any imbalance in any of the disks will generate large side forces that, even if they do not interfere with the operation of the turbine, will drastically increase the wear and tear on the bearings holding the main shaft of the turbine.<p>Gearing Down:<p>Tesla turbines are well known for their very high speed/low torque output. One needs a very high gear ratio to reduce the rotation rate down to something usable with most machinery, but one does also gain a high torque on the geared-down output.
However, the low torque of the Tesla turbine's main shaft means it is very sensitive to the friction of the main pickup gearing and bearing; plus, that gearing will need to operate at a high rate of speed as well.<p>Disk Spacing:<p>The performance of the Tesla turbine depends on the boundary layer. A simple rule (not a fixed one) is that the best performance comes with a gap between the disks of 3-5 times the thickness of the boundary layer. However, this layer's thickness depends on what fluid you are using (i.e. air, water, steam, condensing steam, combustion gases, ...), its speed when injected, its temperature at all points inside the turbine, the same again for the pressure, and still a number of additional factors. I noticed very, very few sites try to figure out how thick this layer is and then space out the disks according to that data.<p>Flat Disks:<p>Now here is the real killer of Tesla turbine efficiency. Imagine a simple Tesla turbine: a stack of 11 flat disks, each 10 inches in diameter, with a .1 inch gap between them and an exhaust hole in the center 1 inch in diameter. The surface area for the driving fluid entering the Tesla turbine is the circumference (diameter times Pi) times the width of the input area: (10*Pi)*(10*.1) = 10*Pi square inches, or about 31.4 square inches. But the exit hole is only one tenth the size. So any fluid entering a Tesla turbine gets slowed down while inputting energy into the turbine, and at the same time must exit from the smaller area of the exhaust hole. But notice it gets worse: the area of the disk gaps doing the exhausting is (1*Pi)*(10*.1) = Pi, or about 3.14 square inches, but the area of the hole is .5*.5*Pi, or about .79 square inches.<p>Exhaust Hole Size:<p>To solve the two above problems we have to do two things, and both affect the efficiency of the turbine. First we have to make the exhaust hole closer in size to the surface area of the outer surface. So we get sqrt(10*Pi/Pi) = sqrt(10) = about 3.16 inches in radius if we exhaust from one side only. This of course does not take care of the surface area of the exhausting disk gaps, which is still too small at 6.32*Pi*(10*.1), or about 19.87 square inches. To make up for this we need to taper the thickness of the disks so as to increase the gap size between the disks' surfaces as they near the center.<p>________________________________________<p>Problems resulting from trying to fix the above:<p>Non-Flat Disks:<p>Making the tapered disks is no longer the simple design most people claim the Tesla turbine to be. It requires a precision lathe instead of just cutting out sheet metal.<p>Balance and vibration:<p>The metal disks need to be made of a very uniform material so that they can be balanced easily.<p>Boundary/Gap spacing:<p>With tapering disks you start to lose the ideal gap for efficient operation: if you change the taper to increase the flow, the efficiency drops and the total power output drops. If you don't taper the disks to get efficient power conversion, you get restricted flow and the total power output drops. Balancing how much taper is enough is again something I see missing from all the websites out there. Add in the fact that if you are using steam, combustion, or other hot gases, the nature of the fluid flow changes in different parts of the Tesla turbine as energy is extracted and the gases cool. Your disk design just became a major job.<p>Exhaust Size:<p>K.E. = .5*Mass*V^2, and the fluid in a Tesla turbine tends to flow at the same rate the disks spin.
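<p>(A quick aside: the area arithmetic above is easy to double-check. Here is a minimal Python sketch of the example's numbers; the variable names are my own.)

    from math import pi, sqrt

    disks, gap = 11, 0.1             # 11 disks -> 10 gaps of .1 inch each
    outer_d, hole_d = 10.0, 1.0      # diameters in inches

    total_gap_width = (disks - 1) * gap                   # 1 inch
    inlet_area = (outer_d * pi) * total_gap_width         # rim circumference * gap width, ~31.4
    gap_exhaust_area = (hole_d * pi) * total_gap_width    # gap area at the 1 inch hole, ~3.14
    hole_area = pi * (hole_d / 2) ** 2                    # ~0.79

    # radius the hole would need for its area to match the inlet area:
    r = sqrt(inlet_area / pi)                             # sqrt(10), ~3.16 inches
    resized_gap_exhaust = (2 * r * pi) * total_gap_width  # ~19.87, still short of 31.4

<p>With those figures in hand, back to the kinetic energy estimate.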
If the exhaust hole is .1 the size of the outer diameter, a very rough guess is that the fluid exits with only .1^2 = .01, or 1 percent, of the original K.E. This suggests that we can convert 99% of the available power to useful output. That is why you see people raving about how great Tesla turbines are going to be. However, at that size of exhaust the outflow is very restricted by the small exhaust hole and we get very little power. The redesigned exhaust is .632 times the diameter of the input, which means the exhausting fluid still contains about .632^2 ~= 40 percent of the original K.E. So we have already seen the turbine drop from 99% to about 60% efficient. The restricted area exhausting from the disks into the exhaust hole 'suggests' 19.87/31.4, or another one-third drop in efficiency. So already we see a big drop in possible performance, and the disk gap issues will only make things worse.<p>Disk Surface Finish:<p>Again, something rarely looked at. It should be clear that if the disk surface were perfectly smooth there would be very little drag to transfer K.E. to the disks. However, too much drag just turns the K.E. into heat in the disks and fluid. So what is the best finish to have on the disks? I have a guess, but that is all it is.<p>________________________________________<p>Conclusions:<p>Building a simple but low-efficiency Tesla turbine is easy; however, the moment you decide to make a real power plant from one, the work needed is on the same order as, maybe more than, developing and building an I.C. engine. Tesla turbines do have a lot going for them, but high-end designs are not that easy to make, otherwise lots of people would be using them today already.<p>Earl Colby Pottinger<p>Reference <a href="https://www.physicsforums.com/threads/tesla-turbine-efficiency.238364/page-2" rel="nofollow">https://www.physicsforums.com/threads/tesla-turbine-efficien...</a> |
A blockchain is a specific set of choices suitable for a narrow set of use-cases | <i>As the original link is not available, posting the content from Google Cache for future reference</i><p>Thread by @clemensv: "I've talked to a lot of distributed systems engineers (who build cloud-scale stuff) from across the industry about blockchain. While most pl […]"<p>I've talked to a lot of distributed systems engineers (who build cloud-scale stuff) from across the industry about blockchain. While most platform folks I talked to are perfectly happy to help with frameworks that help sell products or services, ...<p>... and some even rode the crypto wave to make some hay, I have a hard time finding people in the distributed systems platform community who believe that blockchain is even remotely as significant as the hype wants to make us believe.<p>Engineers at the center of the industry understand the qualities of append-only logs, understand signatures and non-repudiation, and understand consensus protocols. They understand that those are a few building blocks from a large toolbox and that there are choices for each.<p>A "blockchain" is a specific composition of specific choices for these building blocks that's suitable for a fairly narrow set of use-cases. The blockchain-specific consensus models are tailored to (a) all-around lack of trust and (b) convergence to a singular global log.<p>Convergence to a singular global log with many candidate writers and many replicas makes it hard to reach consensus and for that consensus to be propagated; PoW or PoW+PoS (cleverly) solve that by being intentionally slow.<p>The PoW lottery is probabilistically set up so that a singular mining winner can emerge and its winning block can be propagated throughout the network before another miner finds a competing solution (there can be many valid ones). The enabler for that is a time window.<p>The tradeoff the "nobody is trusted" blockchain model makes is that it literally trades trust for time. It's slow by design. Giving consensus forming ample time is foundational. (I'll be happy to hear arguments that prove the contrary.)<p>Once parties trust each other to faithfully collaborate, the existing consensus models that we all use to build hyper-scale cloud systems become applicable, and those can resolve even complex and contested consensus problems in a few milliseconds, largely gated by network latency.<p>Even if there's no all-around trust, there's often a neutral party that can supervise a transaction between two parties that don't trust each other, but who each trust that party. "Sidechains" and "private permissioned blockchains" are playing that trick.<p>However, once there's a trusted neutral party, that trusted neutral party can already establish non-repudiation by forcing authentication/authorization along with a content signature and only ever allowing appends. With any regular old database that the neutral party maintains.<p>The world's economy today is built on the very principle that accounting records are both safe from deletion and immutable in digital accounting systems. There's a clear sense of order. They are written to audit logs for non-repudiation proof.<p>The cryptographic chaining of records is a good idea to strengthen the immutability of the ledger as a whole, but it's not really superior to holding an ongoing full copy of the ledger in safe escrow.
A signature chain can be done on any existing database.<p>The single global log requirement, and therefore the global consensus problem, completely falls apart in all cases where the problem is partitionable (FWIW, "sidechains" are partitions). Turns out, that's true for most problems, otherwise nothing would ever scale.<p>Once you can use partitions, the consensus scope shrinks to the scope of the partition. When you transfer money from your bank account, the initial scope is just your bank, with the initial transaction to a clearing account. (FWIW, a "sidechain" is a clearing account.) Easy.<p>For those familiar with (shall I say "classic"?) distributed systems architecture, it's amusing to see things like "sidechains" with local trust relationships emerge, because they are nothing but a reaffirmation of the partitioning principles foundational to a functional economy.<p>Yes. The combination of decentralized operation, variable trust, and non-repudiation is very attractive. The world already works like that. There are easily 15,000+ banks globally and countless more businesses that maintain various ledgers. That's hardly "centralized".<p>"Centralization" is not when 15,000 organizations are federated such that you can transfer funds between them. Yes, you need a banking license and an audit in the local jurisdiction to be a bank. Because, as we see, some people are happy to separate other people from their funds.<p>The specific combination of well-understood architectural building blocks that makes up "blockchain" is very well applicable, but nearly exclusively to the all-around trustless global ledger accounting problem (e.g. "coins").<p>Any other set of requirements is likely better addressed using a different combination of elements from the broad toolset that exists across the distributed systems platform landscape today. /fin<p>PS: I'm not an ideologue on this matter. If you're convinced that you need a blockchain, I'll surely help with the communication path if asked. I have the finest shovels.<p>Clemens Vasters 🇪🇺 (@clemensv) |
Ask HN: Has anyone switched from being a night owl to being a morning person? | I have. Kind of. My solution might not work for you, or it might be unacceptable for other reasons.<p>I've always been a night owl, and by always I mean that I remember anecdotes about getting up in the middle of the night on a Saturday to read or to do something, going back to when I was like 8 years old. Like you, I've realized that my mind just works better at night (but my mood is better during the day; YMMV).<p>When I was studying for my first degree, and during my first job after that, I had a horrible sleep schedule. More like I didn't have a real sleep schedule. I would sleep an indefinite amount of time, between 4 and 7 hours, and then have a mini-crash each weekend, sleeping until 1 or 2 pm. During my first (part-time, mornings) job, for a few months I slept twice from 4 to 8: once at 4-8 am, then again at 4-8 pm. It was way worse than it sounds.<p>About 6 years ago I decided that I had to take care of my sleep, and I devised my life hack: <i>I would sleep just after work, instead of just before work</i>. I've been successfully doing this since then. So, the guidelines are:<p>1) Go to sleep <i>as soon as possible</i>. This means going to sleep at an apparently insane time. Lately I'm going to bed at about 6 pm.<p>2) Sleep as much as you want without looking at any clock. Sometimes I get up at 1 am, sometimes I get up at 3:30. Get up only when you are confident that you don't need more sleep. Don't worry, you will still have a few hours with three juicy bonuses to mental activity (completely rested + night time + silence). If you're anything like me, your productivity will be tremendous.<p>3) Most chores can be done during this night time. Most notably, it's a great time to cook; just be careful about noise (avoid blenders, and if you're cleaning, avoid the vacuum cleaner). Other chores, like buying groceries or anything noisy, will have to wait until the weekend.<p>4) For obvious reasons, social life will also have to wait until weekends. You might not get a lot with this schedule; I've never had much and I'm accustomed to that, so this is not a problem for me. But this is probably the greatest deal breaker for most people.<p>5) Follow the same sleep schedule every day, including Saturdays and Sundays. This is very important, and it requires discipline.<p>6) Ideally you should also have a job with sane hours (finishing at the same time every day), which again is not available to everyone. Plan your meals accordingly. I have two meals a day, one just before work and another one, lighter, just after work and before going to bed. You might have a different plan, like eating once when you get up and then having lunch halfway through your work.<p>The key is: regardless of whether you are a lark or an owl, go to bed at the same time every day, try to sleep 8 hours, and try to make them continuous. There is this myth about people having a noncontinuous schedule during the Middle Ages. I'm a little sceptical; but above that, I've realized that, for me, the only thing that really works is continuous sleep.<p>Other tips:<p>1) Avoid caffeine. If you follow this schedule, getting up and needing coffee immediately will be a thing of the past. In fact, if you are still tired after 8 hours, you can just sleep 1 or 2 more, since work is still a few hours away. I've found that an excess of sugar also works against this schedule, but again, YMMV.<p>2) No alarm clocks either.
You will wake up naturally when you are fully rested. Also, turn off your phone just as you arrive home.<p>3) Melatonin is your friend. You might notice that some days your body is not ready to sleep. If this happens to you, start taking melatonin about 1h before going to bed.<p>4) As much as I dislike it, exercise helps a lot. Something as simple as walking around your house with the blinds closed might be good enough; the ideal would be to do something during the day that tires you a little, like a 1h walk or a short visit to the gym.<p>The biggest problem is setting up the schedule. In order to do this, go to bed two or three hours <i>later</i> each day, until you are going to sleep at the desired time. I usually reserve the last 4 or 5 days of each vacation period to do this, but of course this might not work if you don't have many days off during the year. I live in Europe, so I can afford three 10-15 day periods during the year (mind your bank holidays and weekends and plan accordingly, of course).<p>I always say that the cons of this schedule are obvious, but the pros, while not obvious, are also there and they are great. No alarm clock; sleep as much as you want every day; the best hours of the day are for myself and not for the job (but you will still be very productive at work since you will be well rested); and I get easy access to some underrated pleasures, like going to the beach just before dawn. |
Ask HN: How to not be intimidated by people with higher credentials? | Keep it all about the goals. Where you are strong, don't hold back. Nail it.<p>Where you need help, ask for it unabashedly. Where you can give help, give it unabashedly.<p>Put real time, some fraction of your personal time, into continuing to be relevant. This will give you the skills you need, and things, projects, and interests to talk about and show off.<p>Once, I got a call from a friend: "I'm the dumbest guy here." I reframed that, with a quick reminder about what smarts are and what wisdom is.<p>What he really meant was that he was the least experienced person there. I told him, "Good, you are gonna grow a whole lot." And he did, then moved on to something else after finding that particular role wasn't for him. And that's not inability to perform, either. He just felt more strongly about something else.<p>The truth is, most of us are smart enough. You are, and being here to ask about this in the way you just did shows it too. The rest is work, and a chunk of that work is just caring enough about the things that matter. Time spent sorting those from the things that probably don't matter is high value.<p>From there, the amount of that work depends on a whole lot of things too. We all have the choice to either do it or not.<p>So do the work. It will show.<p>All of that will net you the advantage, or at least garner favor and respect, a very significant percentage of the time. That's really as good as it all ever gets.<p>Say you did have a degree. What would you change about these basic dynamics? I've been through this process a few times now, and have learned it's not worth changing a thing. Attacking your worries this way can work. It worked and continues to work for me.<p>Play very strong in your lane too. Whatever that lane is, make sure you've got your basic priorities in order.<p>Others, who should be playing in theirs, sometimes don't. Sometimes you won't either. Stand ready to help, and let it be known that you treat people on your team right. Nobody wants to fail. It just costs everyone.<p>Be flexible in these things, and I'm speaking to the help we all might need, or the perspective we want to share, or the help we may need to give.<p>An example of what I'm trying to convey here might be past experiences. Right now, I'm in a technical service and support position. It's technically an executive role, but the company is small, so mostly it's a lot of work: looking hard to make the right calls consistently right now, and modeling the future, because it's coming soon.<p>This means putting out a lot of fires and saving asses. I've had a number of roles, and gladly share those experiences with people who hold them now. IT, training, consulting, manufacturing, engineering...<p>Others know this, and know I'm there to advise, help, and share, and that I expect the same in return. After a time, once the team solidifies and begins to really run well, everyone who modeled these things is well respected and owns the culture, and newbies, as they come in, can tell all of that; the ones who get it seek that out, and their own place in it.<p>What typically happens is people will come and find me to talk. What they really want to do is think through or past something. Great, let's do that and find out what we think makes sense and act on it.<p>Often, I think of this as working with people: mutual respect, mutual consideration. It has paid off for me many, many times, and yes! Among people with credentials far greater than mine. Sometimes, it mattered a little early on.
Maybe a proof or two, or some deft management of office dynamics, were needed. Nothing too tough, unless I made it tough. Sometimes this is just automatic, rote behavior, the product of old or shallow norms in play in that particular organization.<p>In most cases, I got the opportunity to really help or add value, and I just did, no worries. The trick is to be there, not judgmental, just observant, ready.
Doesn't take many of those for most people to get what you are about, and the moment they get it, they value you differently from that point on.<p>Honestly, this is human work any of us has to do. And some of us learn better by doing, or when focused. I am one of those, and always have been. You probably are too.<p>Just know there is room for all of us, and by that I mean that diverse teams, broad in experience, deep in skills, and able to communicate, are strong, likely successful teams. Most people get that. A few won't, and that's true for all of us, no matter our credential status or anything else.<p>What I've come to learn, and I'm later in my working career now, is that the cost comes down to making the personal investments needed to just nail whatever we do, and to manage the tougher-case people away from being problems that neither you nor your team need.<p>Good mentors help too. I've had those off and on, and I can't overstate the value of both being one and having one. Most became friends, and we talk regularly; those conversations have run for years now, decades, transcending what role we are in, or which company we are at or are starting. They are about the journey, challenges, experiences, wins, losses, the stories. They are about making those good stories, or the best of stories, as much as we can manage.<p>For some perspective, I am a mentor for a guy headed into a postgrad program. I wrote my first letter of recommendation for him. Just paying the good I got forward. He's headed into an awesome career. It's gonna be great to keep in touch.<p>You will experience these things too, given you seek them.
I very strongly encourage you to do that.<p>Also, laugh. It helps, and it's catchy. |
100 prisoners problem | Say you have 10 players in a row. You number each player by its position in the
row: 1, 2, 3, .., 10. You then write the players' numbers on 10 pieces of paper,
and you randomly place the papers in 10 boxes. You place exactly one paper in
each of the boxes. The boxes are also numbered from 1 to 10.<p>You allow each player to look into half of the boxes. Each player can choose
which boxes to look into. The player "wins" if they find a paper with their
number written on it. But they can't tell anyone if they won or lost, what they
saw in the boxes, or which boxes they picked.<p>The problem, then, is the following: which boxes should each player look into,
such that the chance of all of them "winning" at the same time is as high as
possible.<p>For example, one way to put the 10 numbers in the 10 boxes is:<p>Box 1: 10<p>Box 2: 9<p>Box 3: 8<p>Box 4: 4<p>Box 5: 6<p>Box 6: 5<p>Box 7: 7<p>Box 8: 3<p>Box 9: 2<p>Box 10: 1<p>The ordering of the numbers in the boxes is a permutation of the 10 numbers.
We know that there are 10! possible ways to place the numbers in the boxes.<p>Now, consider the following strategy that player X can use to open his 5 boxes.
He first opens the box numbered X. If he sees his number in it, then he wins.
Otherwise, he saw a different number Y. He then opens the box numbered Y. If he
finds his number, he wins. Otherwise, he continues until he has opened the 5
boxes he is allowed to open.<p>If the player were allowed to open as many boxes as he likes, then he would
always win. That is because he would either find his number in a box, or
continue opening boxes he hasn't opened before. Why is this true? Well, let's
say it isn't, and that he opens boxes 3 -> 2 -> 5 -> 7 -> 5. But that cannot
happen: the first time he opened box 5 was because he saw the number 5 in box 2.
If he had to visit box 5 again, that would mean box 7 also had the number 5 in
it, which is impossible since each number is in exactly one box. In fact, the
only way player X could go back to a box he has already seen is by finding his
own number (X), which would lead him back to the first box he opened.<p>Now, we know that a player "wins" if he finds his number before opening more
than 5 boxes. Or, to put it differently, if the loop he follows has length at
most 5. In the example above, player 2 would open box 2, which would lead him to
box 9, which contains number 2, forming a loop of length 2.<p>So, when do all players "win" at the same time? When there does not exist a
loop of length 6 or more. More specifically, there is no loop of length 6, 7,
8, 9, or 10.
So: P_win = 1 - P_loop(6) - P_loop(7) - P_loop(8) - P_loop(9) - P_loop(10).<p>We can show [1] that P_loop(k) = 1/k when k > 5. We need the loop to be larger
than 5 because then the permutation can contain at most one such loop, so the
events are disjoint and their probabilities simply add up.<p>P_win then works out to 1 - 1/6 - 1/7 - 1/8 - 1/9 - 1/10, or ~35.4%. In general,
if we have N players and N boxes then P_win would be:
P_win = 1 - Sum[k=N/2+1 .. N] P_loop(k)
or
P_win = 1 - ( Sum[k=1 .. N] P_loop(k) - Sum[k=1 .. N/2] P_loop(k) )
or
P_win ~= 1 - ( ln(N) - ln(N/2) ) = 1 - ln(2) ~= 30.7%<p>(each sum is a partial sum of the harmonic series.)<p>[1] Why is P_loop(k) = 1/k when k > N/2, though?<p>Let's say you have numbers 1, 2, .., k. All permutations of them are k!.
How many of those k! permutations form a single loop? Well, at the first position,
you can place any number other than '1', so you have (k-1) choices. Say you
placed number Z at position 1. Then at position Z, you can't place '1' and you
can't place 'Z', so you are left with (k-2) choices. In the last position, you
have to place number '1' to close the loop, so you have no choice. So, you have
(k-1)(k-2)(k-3)...1 ways to form a loop, or (k-1)!.<p>Therefore, with a little bit of squinting, we can see that the probability that
a random permutation has a loop on it of length exactly k is (k-1)!/k!, or 1/k.<p>We can formalize this argument as follows: there are (N choose k) ways to pick
the k places on the permutation where the cycle lives. There are (k-1)! valid
ways to form a cycle on these k places. There are (N-k)! ways to arrange the
remaining elements of the permutation. There are N! permutations. Therefore,
P_loop(k, N) =
(N choose k) * (N-k)! * (k-1)! / N! = N!/(k! (N-k)!) * (N-k)! * (k-1)! / N! = 1/k
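<p>A quick way to check all of the above is to just run it. Here is a minimal Python sketch (mine, not part of the original argument; the names and trial count are arbitrary) that plays the cycle-following strategy and compares the simulated success rate with the exact formula:

    import random
    from fractions import Fraction

    def player_wins(perm, player, max_opens):
        # Cycle strategy: start at your own box, then open the
        # box named by the slip you just found.
        box = player
        for _ in range(max_opens):
            slip = perm[box]
            if slip == player:
                return True
            box = slip
        return False

    def all_win(n):
        perm = list(range(n))
        random.shuffle(perm)
        return all(player_wins(perm, p, n // 2) for p in range(n))

    N, TRIALS = 100, 100_000
    hits = sum(all_win(N) for _ in range(TRIALS))
    print("simulated:", hits / TRIALS)

    # exact: P_win = 1 - Sum[k=N/2+1 .. N] 1/k
    exact = 1 - sum(Fraction(1, k) for k in range(N // 2 + 1, N + 1))
    print("exact:    ", float(exact))

For N = 100 the exact value is about 0.3118, and the simulation lands within a few tenths of a percent of it. |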