id (int64) | by (large_string) | time (timestamp[us]) | title (large_string) | text (large_string) | url (large_string) | score (int64) | descendants (int64) | kids (large list) | deleted (large list) | dead (bool) | scraping_error (large_string) | scraped_title (large_string) | scraped_published_at (large_string) | scraped_byline (large_string) | scraped_body (large_string) | scraped_at (timestamp[us]) | scraped_language (large_string) | split (large_string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
10,820,498 | trengrj | 2016-01-01T00:16:04 | Debian creator Ian Murdock dies at 42 | null | http://www.zdnet.com/article/debian-linux-founder-ian-murdock-dies-at-42-cause-unknown/ | 1 | null | null | null | true | no_error | Debian Linux founder Ian Murdock dies at 42, cause unknown | null | Written by | UPDATED: I'd known Ian Murdock, founder of Debian Linux and most recently a senior Docker staffer, since 1996. He died this week much too young, 42, in unclear circumstances. Ian Murdock backed away from saying he would commit suicide in later tweets, but he continued to be seriously troubled by his experiences and died later that night. No details regarding the cause of his death have been disclosed. In a blog posting, Docker merely stated that: "It is with great sadness that we inform you that Ian Murdock passed away on Monday night. This is a tragic loss for his family, for the Docker community, and the broader open source world; we all mourn his passing."The San Francisco Police Department said they had nothing to say about Murdock's death at this time. A copy of what is reputed to be his arrest record is all but blank.Sources close to the police department said that officers were called in to responded to reports of a man, Ian Murdock, trying to break into a home at the corner of Steiner and Union St at 11.30pm on Saturday, December 26. Murdock was reportedly drunk and resisted arrest. He was given a ticket for two counts of assault and one for obstruction of an officer. An EMT treated an abrasion on his forehead at the site, and he was taken to a hospital.At 2:40 AM early Sunday morning, December 27, he was arrested after banging on the door of a neighbor in the same block. It is not clear if he was knocking on the same door he had attempted to enter earlier. A medic treated him there for un-described injuries. Murdock was then arrested and taken to the San Francisco county jail. On Sunday afternoon, Murdock was bailed out with a $25,000 bond.On Monday afternoon, December 28, the next day, Murdock started sending increasingly erratic tweets from his Twitter account. The most worrying of all read: "i'm committing suicide tonight.. do not intervene as i have many stories to tell and do not want them to die with me"At first people assumed that his Twitter account had been hacked. Having known Murdock and his subsequent death, I believe that he was the author of these tweets.His Twitter account has since been deleted, but copies of the tweets remain. He wrote that: "the police here beat me up for knowing [probably an auto-correct for "knocking"] on my neighbor's door.. they sent me to the hospital."I have been unable to find any San Francisco area hospital with a record of his admission. Murdock wrote that he had been assaulted by the police, had his clothes ripped off, and was told, "We're the police, we can do whatever the fuck we want." He also wrote: "they beat the shit out of me twice, then charged me $25,000 to get out of jail for battery against THEM."Murdock also vented his anger at the police."(1/2) The rest of my life will be devoted to fighting against police abuse.. I'm white, I made $1.4 million last year, (2/2) They are uneducated, bitter, and and only interested in power for its own sake. Contact me [email protected] if you can help. -ian"After leaving the courtroom, presumably a magistrate court, Murdock tweeted that he had been followed home by the police and assaulted again. He continued: "I'm not committing suicide today. 
I'll write this all up first, so the police brutality ENDEMIC in this so call free country will be known." He added, "Maybe my suicide at this, you now, a successful business man, not a N****R, will finally bring some attention to this very serious issue." His last tweet stated: "I am a white male, make a lot money, pay a lot of money in taxes, and yet their abuse is equally doned out. DO NOT CROSS THEM!?" He appears to have died that night, Monday, December 28. At the time of this writing, the cause of death still remains unknown. His death is a great loss to the open-source world. He created Debian, one of the first Linux distributions and still a major distro; he also served as an open-source leader at Sun; as CTO for the Linux Foundation, and as a Docker executive. He will be missed. This story has been updated with details about Murdock's arrest. Related Stories: Not a typo: Microsoft is offering a Linux certification; Debian GNU/Linux now supported on Microsoft's Azure; What's what in Debian Jessie | 2024-11-08T20:50:45 | en | train
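
The row above is the first record under the schema at the top of this dump. For anyone who wants to work with rows in this shape, here is a minimal sketch using the Hugging Face `datasets` library; the repository id is a placeholder, so substitute the real path of this dataset.

```python
# Minimal sketch: load rows with the schema shown above and keep only the
# ones whose scrape succeeded. The dataset id below is hypothetical.
from datasets import load_dataset

ds = load_dataset("someone/hn-stories-scraped", split="train")  # placeholder id

# Keep rows whose scrape succeeded and that carry a non-empty body.
ok = ds.filter(lambda r: r["scraping_error"] == "no_error" and r["scraped_body"])

for row in ok.select(range(min(3, len(ok)))):
    print(row["id"], row["by"], row["score"], row["scraped_language"])
    print("HN title:      ", row["title"])
    print("Scraped title: ", row["scraped_title"])
```
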
10,820,620 | BuckRogers | 2016-01-01T00:52:20 | Where are we in the Python 3 transition? | null | http://www.snarky.ca/the-stages-of-the-python-3-transition | 4 | 0 | null | null | null | no_error | Where are we in the Python 3 transition? | 2015-12-31T04:35:00.000Z | Brett Cannon |
Dec 30, 2015
3 min read
Python
The Kübler-Ross model outlines the stages that one goes through in dealing with death:
Denial
Anger
Bargaining
Depression
Acceptance
This is sometimes referred to as the five stages of grief. Some have jokingly called them the five stages of software development. I think it actually matches the Python community's transition to Python 3 rather well, both what has occurred and where we currently are (summary: the community is at least in stage 4 with some lucky to already be at the end in stage 5).
Denial
When Python 3 first came out and we said Python 2.7 was going to be the last release of Python 2, I think some people didn't entirely believe us. Others believed that Python 3 didn't offer enough to bother switching to it from Python 2, and so they ignored Python 3's existence. Basically the Python development team and people willing to trust that Python 3 wasn't some crazy experiment that we were going to abandon, ported their code to Python 3 while everyone else waited.
Anger
When it became obvious that the Python development team was serious about Python 3, some people got really upset. There were accusations of us not truly caring about the community and ignoring that the transition was hurting the community irreparably. This was when whispers of forking Python 2 to produce a Python 2.8 release came about, although that obviously never occurred.
Bargaining
Once people realized that being mad about Python 3 wasn't going to solve anything, the bargaining began. People came to the Python development team asking for features to be added to Python 3 to make transitioning easier such as bringing back the u string prefix in Python 3. People also made requests for exceptions to Python 2's "no new features" policy which were also made to allow for Python 2 to stay a feasible version of Python longer while people transitioned (this all landed in Python 2.7.9). We also extended the maintenance timeline of Python 2.7 from 5 years to 10 years to give people until 2020 to transition before people will need to pay for Python 2 support (as compared to the free support that the Python development team has provided).
Depression
7 years into the life of Python 3, it seems a decent amount of people have reached the point of depression about the transition. With Python 2.7 not about to be pulled out from underneath them, people don't feel abandoned by the Python development team. Python 3 also has enough new features that are simply not accessible from Python 2 that people want to switch. And with porting Python 2 code to run on Python 2/3 simultaneously heavily automated and being doable on a per-file basis, people no longer seem to be averse to porting their code like they once were (although it admittedly still takes some effort).
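
The per-file porting style mentioned above generally starts with `__future__` imports so the same file runs on Python 2.7 and Python 3. A minimal sketch, using only the standard library; the function names are illustrative, not taken from the post.

```python
# One file written to behave the same on Python 2.7 and Python 3.
from __future__ import absolute_import, division, print_function, unicode_literals

import io


def read_text(path):
    # io.open returns unicode text on both Python versions when an
    # encoding is given, unlike the bare open() builtin on Python 2.
    with io.open(path, encoding="utf-8") as handle:
        return handle.read()


def describe(name, count):
    # u"" literals work on Python 2 and, since PEP 414, on Python 3.3+;
    # the division import makes / mean true division everywhere.
    return u"{0}: {1} items, {2} per half".format(name, count, count / 2)


if __name__ == "__main__":
    print(describe(u"example", 5))  # -> example: 5 items, 2.5 per half
```
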
Unfortunately people are running up against the classic problem of lacking buy-in from management. I regularly hear from people that they would switch if they could, but their manager(s) don't see any reason to switch and so they can't (or that they would do per-file porting, but they don't think they can convince their teammates to maintain the porting work). This can be especially frustrating if you use Python 3 in personal projects but are stuck on Python 2 at work. Hopefully Python 3 will continue to offer new features that will eventually entice reluctant managers to switch. Otherwise financial arguments might be necessary in the form of pointing out that porting to Python 3 is a one-time cost while staying on Python 2 past 2020 will be a perpetual cost for support to some enterprise provider of Python and will cost more in the long-term (e.g., paying for RHEL so that someone supports your Python 2 install past 2020). Have hope, though, that you can get buy-in from management for porting to Python 3 since others have and thus reached the "acceptance" stage.
Acceptance
While some people feel stuck in Python 2 at work and are "depressed" over it, others have reached the point of having transitioned their projects and accepted Python 3, both at work and in personal projects. Various numbers I have seen this year suggest about 20% of the scientific Python community and 20% of the Python web community have reached this point (I have yet to see reliable numbers for the Python community as a whole; PyPI is not reliable enough for various reasons). I consistently hear from people using Python 3 that they are quite happy; I have yet to hear from someone who has used Python 3 that they think it is a worse language than Python 2 (people are typically unhappy with the transition process and not Python 3 itself).
With five years left until people will need to pay for Python 2 support, I'm glad that the community seems to have reached either the "depression" or "acceptance" stages and has clearly moved beyond the "bargaining" stage. Hopefully in the next couple of years, managers across the world will realize that switching to Python 3 is worth it and not as costly as they think it is compared to having to actually pay for Python 2 support and thus more people will get to move to the "acceptance" stage.
| 2024-11-08T12:09:11 | en | train |
10,820,781 | jonbaer | 2016-01-01T01:54:49 | Algorithms of the Mind – What Machine Learning Teaches Us About Ourselves | null | https://medium.com/deep-learning-101/algorithms-of-the-mind-10eb13f61fc4#.hzuheczet | 3 | 0 | null | null | null | no_error | Algorithms of the Mind - Deep Learning 101 - Medium | 2015-05-22T09:27:31.481Z | Christopher Nguyen | What Machine Learning Teaches Us About Ourselves“Science often follows technology, because inventions give us new ways to think about the world and new phenomena in need of explanation.”Or so Aram Harrow, an MIT physics professor, counter-intuitively argues in “Why now is the right time to study quantum computing”.He suggests that the scientific idea of entropy could not really be conceived until steam engine technology necessitated understanding of thermodynamics. Quantum computing similarly arose from attempts to simulate quantum mechanics on ordinary computers.So what does all this have to do with machine learning?Much like steam engines, machine learning is a technology intended to solve specific classes of problems. Yet results from the field are indicating intriguing—possibly profound—scientific clues about how our own brains might operate, perceive, and learn. The technology of machine learning is giving us new ways to think about the science of human thought … and imagination.Not Computer Vision, But Computer ImaginationFive years ago, deep learning pioneer Geoff Hinton (who currently splits his time between the University of Toronto and Google) published the following demo.Hinton had trained a five-layer neural network to recognize handwritten digits when given their bitmapped images. It was a form of computer vision, one that made handwriting machine-readable.But unlike previous works on the same topic, where the main objective is simply to recognize digits, Hinton’s network could also run in reverse. That is, given the concept of a digit, it can regenerate images corresponding to that very concept.We are seeing, quite literally, a machine imagining an image of the concept of “8”.The magic is encoded in the layers between inputs and outputs. These layers act as a kind of associative memory, mapping back-and-forth from image and concept, from concept to image, all in one neural network.“Is this how human imagination might work?But beyond the simplistic, brain-inspired machine vision technology here, the broader scientific question is whether this is how human imagination — visualization — works. If so, there’s a huge a-ha moment here.After all, isn’t this something our brains do quite naturally? When we see the digit 4, we think of the concept “4”. Conversely, when someone says “8”, we can conjure up in our minds’ eye an image of the digit 8.Is it all a kind of “running backwards” by the brain from concept to images (or sound, smell, feel, etc.) through the information encoded in the layers? Aren’t we watching this network create new pictures — and perhaps in a more advanced version, even new internal connections — as it does so?On Concepts and IntuitionsIf visual recognition and imagination are indeed just back-and-forth mapping between images and concepts, what’s happening between those layers? Do deep neural networks have some insight or analogies to offer us here?Let’s first go back 234 years, to Immanuel Kant’s Critique of Pure Reason, in which he argues that “Intuition is nothing but the representation of phenomena”.Kant railed against the idea that human knowledge could be explained purely as empirical and rational thought. 
It is necessary, he argued, to consider intuitions. In his definitions, “intuitions” are representations left in a person’s mind by sensory perceptions, where as “concepts” are descriptions of empirical objects or sensory data. Together, these make up human knowledge.Fast forwarding two centuries later, Berkeley CS professor Alyosha Efros, who specializes in Visual Understanding, pointed out that “there are many more things in our visual world than we have words to describe them with”. Using word labels to train models, Efros argues, exposes our techniques to a language bottleneck. There are many more un-namable intuitions than we have words for.There is an intriguing mapping between ML Labels and human Concepts, and between ML Encodings and human Intuitions.In training deep networks, such as the seminal “cat-recognition” work led by Quoc Le at Google/Stanford, we’re discovering that the activations in successive layers appear to go from lower to higher conceptual levels. An image recognition network encodes bitmaps at the lowest layer, then apparent corners and edges at the next layer, common shapes at the next, and so on. These intermediate layers don’t necessarily have any activations corresponding to explicit high-level concepts, like “cat” or “dog”, yet they do encode a distributed representation of the sensory inputs. Only the final, output layer has such a mapping to human-defined labels, because they are constrained to match those labels.“Is this Intuition staring at us in the face?Therefore, the above encodings and labels seem to correspond to exactly what Kant referred to as “intuitions” and “concepts”.In yet another example of machine learning technology revealing insights about human thought, the network diagram above makes you wonder whether this is how the architecture of Intuition — albeit vastly simplified — is being expressed.The Sapir-Whorf ControversyIf — as Efros has pointed out — there are a lot more conceptual patterns than words can describe, then do words constrain our thoughts? This question is at the heart of the Sapir-Whorf or Linguistic Relativity Hypothesis, and the debate about whether language completely determines the boundaries of our cognition, or whether we are unconstrained to conceptualize anything — regardless of the languages we speak.In its strongest form, the hypothesis posits that the structure and lexicon of languages constrain how one perceives and conceptualizes the world.Can you pick the odd one out? The Himba — who have distinct words for the two shades of green — can pick it out instantly. Credit: Mark Frauenfelder, How Language Affects Color Perception, and Randy MacDonald for verifying the RGB’s.One of the most striking effects of this is demonstrated in the color test shown here. When asked to pick out the one square with a shade of green that’s distinct from all the others, the Himba people of northern Namibia — who have distinct words for the two shades of green — can find it almost instantly.The rest of us, however, have a much harder time doing so.The theory is that — once we have words to distinguish one shade from another, our brains will train itself to discriminate between the shades, so the difference would become more and more “obvious” over time. In seeing with our brain, not with our eyes, language drives perception.“We see with our brains, not with our eyes.With machine learning, we also observe something similar. In supervised learning, we train our models to best match images (or text, audio, etc.) 
against provided labels or categories. By definition, these models are trained to discriminate much more effectively between categories that have provided labels, than between other possible categories for which we have not provided labels. When viewed from the perspective of supervised machine learning, this outcome is not at all surprising. So perhaps we shouldn’t be too surprised by the results of the color experiment above, either. Language does indeed influence our perception of the world, in the same way that labels in supervised machine learning influence the model’s ability to discriminate among categories.And yet, we also know that labels are not strictly required to discriminate between cues. In Google’s “cat-recognizing brain”, the network eventually discovers the concept of “cat”, “dog”, etc. all by itself — even without training the algorithm against explicit labels. After this unsupervised training, whenever the network is fed an image belonging to a certain category like “Cats”, the same corresponding set of “Cat” neurons always gets fired up. Simply by looking at the vast set of training images, this network has discovered the essential patterns of each category, as well as the differences of one category vs. another.In the same way, an infant who is repeatedly shown a paper cup would soon recognize the visual pattern of such a thing, even before it ever learns the words “paper cup” to attach that pattern to a name. In this sense, the strong form of the Sapir-Whorf hypothesis cannot be entirely correct — we can, and do, discover concepts even without the words to describe them.Supervised and unsupervised machine learning turn out to represent the two sides of the controversy’s coin. And if we recognized them as such, perhaps Sapir-Whorf would not be such a controversy, and more of a reflection of supervised and unsupervised human learning.I find these correspondences deeply fascinating — and we’ve only scratched the surface. Philosophers, psychologists, linguists, and neuroscientists have studied these topics for a long time. The connection to machine learning and computer science is more recent, especially with the advances in big data and deep learning. When fed with huge amounts of text, images, or audio data, the latest deep learning architectures are demonstrating near or even better-than-human performance in language translation, image classification, and speech recognition.Every new discovery in machine learning demystifies a bit more of what may be going on in our brains. We’re increasingly able to borrow from the vocabulary of machine learning to talk about our minds. | 2024-11-08T14:18:52 | en | train |
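
The "recognize, then run backwards" behaviour described in the article above can be sketched as a small autoencoder: an encoder maps a bitmap to a compact internal code, and a decoder regenerates a bitmap from that code. This is not Hinton's actual network (his was a deep belief net trained layer by layer); it is a minimal PyTorch sketch of the same round trip, with made-up layer sizes and random stand-in data.

```python
# Minimal autoencoder sketch of the image -> code -> image round trip.
import torch
from torch import nn


class TinyAutoencoder(nn.Module):
    def __init__(self, n_pixels=784, n_code=32):
        super().__init__()
        # "Recognition" direction: bitmap in, compact code out.
        self.encoder = nn.Sequential(
            nn.Linear(n_pixels, 256), nn.ReLU(),
            nn.Linear(256, n_code),
        )
        # "Imagination" direction: run backwards from code to bitmap.
        self.decoder = nn.Sequential(
            nn.Linear(n_code, 256), nn.ReLU(),
            nn.Linear(256, n_pixels), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code


model = TinyAutoencoder()
digits = torch.rand(8, 784)              # stand-in for 28x28 digit bitmaps
reconstruction, code = model(digits)     # recognize, then regenerate
print(code.shape, reconstruction.shape)  # torch.Size([8, 32]) torch.Size([8, 784])
```
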
10,820,785 | jonbaer | 2016-01-01T01:55:57 | Demystifying Deep Reinforcement Learning | null | http://www.nervanasys.com/demystifying-deep-reinforcement-learning/ | 3 | 0 | null | null | null | no_error | KINGGACOR | Situs Slot Gacor Hari Ini Slot PG Soft Maxwin di Indonesia | null | null |
KINGGACOR | Situs Slot Gacor Hari Ini Slot PG Soft Maxwin di Indonesia
KINGGACOR adalah salah satu situs slot online terpercaya di Indonesia dengan platform slot gacor hari ini dan menawarkan berbagai jenis permainan slot melalui provider terpercaya seperti PG Soft dan Pragmatic Play. Kinggacor telah menjadi pilihan utama para pemain slot gacor dikarenakan RTP slot yang tinggi untuk memberikan peluang kemenangan yang lebih besar kepada pemain. Berbagai jenis slot online tersedia di kinggacor seperti dengan berbagai jenis tema permainan yang dipadukan animasi game yang memukau dan memiliki bonus promosi yang dapat membantu pemain meraih kemenangan maxwin. Didukung dengan artificial intelligence yang dimiliki kinggacor, pemain akan dibantu untuk bisa memenangi game slot setiap saat.
Kinggacor bekerjasama dengan PG Soft untuk memberikan pengalman pemain terbaik kepada seluruh pemain slot gacor, provider slot thailand terkenal yang sudah terbukti menghasilkan banyak kemenangan besar bagi para pemain. PG Soft memiliki berbagai game-game slot berkualitas tinggi yang tidak hanya menawarkan tampilan visual menarik tetapi juga fitur-fitur seperti Free Spins, Wilds, dan fitur bonus yang menguntungkan pemain slot. Berbagai keuntungan yang diberikan PG Soft menjadikan mereka sebagai provider slot gacor online yang paling dicari dan dimainkan di Indonesia saat ini.
5 Game Slot Online PG Soft Paling Diminati Hari Ini
PG Soft nerupakan provider Slot Gacor nomor 1 di Indonesia hari ini. Dengan berbagai keunggulan yang dimiliki, PG Soft kerap memberikan inovasi dan menciptakan permainan slot baru secara rutin. Berikut beberpa game slot gacor online terkenal yang dimiliki oleh Platform slot PG Soft :
Slot Gacor PG Soft Mahjong Ways 2
Slot Gacor PG Soft Caishen Wins
Slot Gacor PG Soft Ganesha Fortune
Slot Gacor PG Soft Treasure of Aztec
Slot Gacor Pg Soft Lucky Neko
Kuantitas | 2024-11-08T08:38:29 | id | train |
10,820,925 | Someone | 2016-01-01T02:51:11 | Users No Longer Need to Jailbreak Apple iOS to Load Rogue Apps | null | http://www.darkreading.com/vulnerabilities---threats/users-no-longer-need-to-jailbreak-apple-ios-to-load-rogue-apps/d/d-id/1323726 | 2 | 0 | null | null | null | no_error | Users No Longer Need to Jailbreak Apple iOS To Load Rogue Apps | 2015-12-29T17:00:00.000Z | Ericka Chickowski, Contributing Writer | Security practitioners who've counted on the protection of Apple App Store's walled garden approach now have something new to worry about: rogue app marketplaces are now using stolen enterprise certificates to allow users with even non-jailbroken iPhones and iPads to download applications through unapproved channels. Researchers from Proofpoint have dubbed the process used by these types of rogue app stores as "DarkSideLoaders." In their research, they pointed to one marketplace in particular, vShare, as an example of those using DarkSideLoader methods. Advertising one million apps available for iPhones and iPads, including pirated paid apps available for free, vShare in past years has catered to Android and jailbroken iOS devices. However, the game has now changed for this marketplace as it has figured out how to "sideload" applications, or circumvent the Apple App Store or legitimate app stores, into non-jailbroken iOS devices.Rogue app stores are doing this by signing their apps with Enterprise App distribution certificates issued by Apple."These certificates are normally issued to enterprises that want to operate their own internal app stores for employees," the researchers wrote. "A rogue app marketplace using the DarkSideLoader technique has implemented a large scale app re-signing capability. Legitimate games and other apps are decrypted, modified, and re-signed with an enterprise certificate for download by users of the rogue app marketplace."This capability puts enterprises at risk when their employees start loading applications from these unauthorized app stores."These apps can make use of private iOS APIs to access operating system functions that would not be permitted by apps that have been vetted by Apple for publishing on the official app store," Proofpoint researchers said.The biggest risk to enterprises, of course, is that these unauthorized apps are used as vehicles to carry known or zero-day vulnerabilities that will allow the app maker to compromise the device. Security experts have long warned about the dangers of jailbreaking devices in order to sideload devices due to the high prevalence of malicious mobile devices lurking in these types of marketplaces. Attackers load attractive applications--such as pirated popular games or productivity applications--with remote access trojans (RATs) that can be used to infiltrate corporate networks when infected devices connect to them."The vShare marketplace is noteworthy in that it is accessible to iOS devices connecting from anywhere in the world, representing a global expansion of this attack technique," wrote the researchers. "This technique also makes it possible to load onto the iOS devices configuration profiles that would allow an attacker to configure VPN settings to redirect network traffic to their man-in-the-middle nodes, as well as change various OS settings."About the AuthorEricka Chickowski specializes in coverage of information technology and business innovation. 
She has focused on information security for the better part of a decade and regularly writes about the security industry as a contributor to Dark Reading. | 2024-11-08T07:55:51 | en | train |
10,820,938 | pavornyoh | 2016-01-01T02:56:54 | In 2015, promising surveillance cases ran into legal brick walls | null | http://arstechnica.com/tech-policy/2015/12/in-2015-promising-surveillance-cases-ran-into-legal-brick-walls/ | 48 | 6 | [10833956, 10834045] | null | null | no_error | In 2015, promising surveillance cases ran into legal brick walls | 2015-12-31T16:00:25+00:00 | Cyrus Farivar |
Attorneys everywhere are calling things moot after the phone metadata program ended.
Today, the first Snowden disclosures in 2013 feel like a distant memory. The public perception of surveillance has changed dramatically since and, likewise, the battle to shape the legality and logistics of such snooping is continually evolving.
To us, 2015 appeared to be the year where major change would happen whether pro- or anti-surveillance. Experts felt a shift was equally imminent. "I think it's impossible to tell which case will be the one that does it, but I believe that, ultimately, the Supreme Court will have to step in and decide the constitutionality of some of the NSA's practices," Mark Rumold, an attorney with the Electronic Frontier Foundation, told Ars last year.
The presumed movement would all start with a lawsuit filed by veteran conservative activist Larry Klayman. Filed the day after the initial Snowden disclosures, his lawsuit would essentially put a stop to unchecked NSA surveillance. In January 2015, he remained the only plaintiff whose case had won when fighting for privacy against the newly understood government monitoring. (Of course, it was a victory in name only—the judicial order in Klayman was stayed pending the government’s appeal at the time).
With January 2016 mere hours away, however, the significance of Klayman is hard to decipher. The past year saw an end to the phone metadata program authorized under Section 215 of the USA Patriot Act, but it also saw the government flex its surveillance muscle in other ways, maintaining or establishing other avenues to keep its fingers on the pulse. That activity dramatically impacted Klayman and other cases we anticipated shaping surveillance in 2015, and we can admit our optimism was severely dashed. In total, zero of the cases we profiled last January got anywhere close to the nine Supreme Court justices in the last 12 months.
Tomorrow we'll bring you five new (and hopefully more active) cases that we’ve got our eye on for 2016, but let’s review what’s happened to our 2015 list first.
The grandaddy of them all
Case name: Klayman v. Obama
Status: Pending at the District of Columbia Circuit Court of Appeals for the second time.
This case is notable for two reasons. First, it was filed the day after the first published disclosures from the Snowden leaks. Second, the case marks a rare win against the government.
US District Judge Richard Leon ruled in favor of plaintiff and attorney Larry Klayman in December 2013, ordering that the NSA’s Bulk Telephony Metadata Program be immediately halted. However, he famously stayed his order pending an appeal to the District of Columbia Circuit Court of Appeals. The DC Circuit reversed his order in August 2015 and sent it back down to Judge Leon. The DC Circuit found (as has often been the case) that Klayman did not have standing as there was not enough evidence that his records had been collected.
Judge Leon next suggested that the case be amended to include a specific plaintiff that had been a customer of Verizon Business Services, not Verizon Wireless. That person, California lawyer J.J. Little, was soon found and added to the case. The judge then ruled on November 9, 2015 that the government be ordered to immediately stop collecting Little’s records. As Judge Leon wrote:
With the Government’s authority to operate the Bulk Telephony Metadata Program quickly coming to an end, this case is perhaps the last chapter in the Judiciary’s evaluation of this particular Program’s compatibility with the Constitution. It will not, however, be the last chapter in the ongoing struggle to balance privacy rights and national security interests under our Constitution in an age of evolving technological wizardry. Although this Court appreciates the zealousness with which the Government seeks to protect the citizens of our Nation, that same Government bears just as great a responsibility to protect the individual liberties of those very citizens.
The government again appealed the decision back to the District of Columbia Circuit Court of Appeals. Weeks later though, the phone metadata program authorized under Section 215 of the USA Patriot Act ended on November 29, 2015. As such, the government said in December 2015 that it will soon formally appeal Judge Leon’s decision, largely on the basis that it’s now moot.
Phone metadata fallout
Case name: ACLU v. Clapper
Status: Sent back down to the Southern District of New York, likely to be dismissed as moot
In a landmark May 2015 decision, the 2nd Circuit Court of Appeals ruled that the bulk telephone metadata program was not authorized by Section 215 of the Patriot Act. Again, that program halted shortly after in November 2015. Today it’s likely that the lower court will soon dismiss the case as moot.
"The statutes to which the government points have never been interpreted to authorize anything approaching the breadth of the sweeping surveillance at issue here," the appeals court wrote last spring.
At the time, the court also noted that the Patriot Act gives the government wide powers to acquire all types of private records on Americans as long as they are "relevant" to an investigation. But according to the court, the government is going too far when it comes to acquiring, via a subpoena, the metadata of every telephone call made to and from the United States.
As 2nd Circuit judges concluded:
The records demanded are not those of suspects under investigation, or of people or businesses that have contact with such subjects, or of people or businesses that have contact with others who are in contact with the subjects—they extend to every record that exists, and indeed to records that do not yet exist, as they impose a continuing obligation on the recipient of the subpoena to provide such records on an ongoing basis as they are created. The government can point to no grand jury subpoena that is remotely comparable to the real‐time data collection undertaken under this program.
After the 2nd Circuit, the case was sent back down to the Southern District of New York, which has yet to schedule any arguments in this case for 2016.
Ridiculously slow
Case name: First Unitarian Church v. National Security Agency
Status: Pending in Northern District Court of California
Unlike Klayman and similar cases, First Unitarian Church v. National Security Agency was filed in 2013 on behalf of a number of wide-ranging religious and non-profit groups. This collective runs the gamut, representing Muslims, gun owners, marijuana legalization advocates, and even the Free Software Foundation. In total, the suit represents the broadest challenge to the metadata collection program so far.
First Unitarian Church takes the bulk collection of data and questions how it may reveal an individual's associations:
Plaintiffs’ associations and political advocacy efforts, as well as those of their members and staffs, are chilled by the fact that the Associational Tracking Program creates a permanent record of all of Plaintiffs’ telephone communications with their members and constituents, among others.
The plaintiffs demands that the metadata program be declared unconstitutional and formally shut down. In the latest chapter, the plaintiffs’ attempt to hold a court hearing regarding their attempt for summary judgment was denied in December 2015.
Overall within this past year, the docket only advanced slightly. Oakland-based US District Judge Jeffrey White did not hold a single hearing in the case, and nothing is scheduled so far for 2016. Like the previous two cases to watch from 2015, this case is also likely to be dismissed as moot given that the phone metadata program under Section 215 is no longer operational.
NSA snoops on a cab driver
Case name: United States v. Moalin
Status: Convicted in Southern District Court of California, appeal pending in 9th Circuit Court of Appeals
As is proven time and time again, the wheels of justice often turn quite slowly. Last year, we guessed that the 9th Circuit Court of Appeals would hear oral arguments in the only criminal case where the government is known to have used phone metadata collection to prosecute a terrorism-related case. It didn’t.
Most of the appellate case’s docket in 2015 was taken up by new lawyers being added to the case and extensions of time. Finally on December 14, 2015, lawyers for Basaaly Moalin and his three co-conspirators filed their opening 258-page brief.
United States v. Basaaly Saeed Moalin involves a Somali taxi driver who was convicted in a San Diego federal court in February 2013 on five counts. The counts include conspiracy to provide material support ($8,500) to the Somali terrorist group Al Shabaab, and Moalin was sentenced in November 2013 to 18 years in prison.
At congressional hearings in June 2013, FBI Deputy Director Sean Joyce testified that under Section 215, the NSA discovered Moalin indirectly conversing with a known terrorist overseas. However, the case was domestic and the FBI took over at that point. They began intercepting 1,800 phone calls over hundreds of hours from December 2007 to December 2008. The agency got access to hundreds of e-mails from Moalin’s Hotmail account, and this access was granted after the government applied for a court order at the FISC.
Attorney Joshua Dratel (Credit: Aurich Lawson)
Though Moalin was arrested in December 2010, attorney Joshua Dratel (yes, the same attorney representing Ross Ulbricht) did not learn of the NSA’s involvement until well after his client’s conviction. Dratel challenged the validity of the spying in court, requesting that the court compel the government to produce the FBI’s wiretap application to the FISC. The government responded with a heavily redacted 60-page brief, essentially arguing that since case involved national security issues this information could not be revealed.
In the appeal to the 9th Circuit, Moalin and the other co-defendants “deny that was the purpose for which the funds were intended. Rather, the funds, consistent with contributions by the San Diego Somali community for all the years it has existed in the US, were designed to provide a
regional Somali administration with humanitarian assistance relating to drought relief, educational services, orphan care, and security.”
Moalin’s legal team—which includes top lawyers from the American Civil Liberties Union—argue forcefully that the court heed the ruling in ACLU v. Clapper.
As it has often done, the government relied upon a legal theory known as the third-party doctrine. This emanated from a 1979 Supreme Court decision, Smith v. Maryland, where the court found that individuals do not have an inherent privacy right to data that has already been disclosed to a third party. So with telecom data for instance, the government has posited that because a call from one person to another forcibly transits Verizon’s network, those two parties have already shared that data with Verizon. Therefore, the government argues, such data can't be private, and it’s OK to collect it.
Moalin’s lawyers firmly reject the third-party doctrine:
The aggregation of records compounded the invasiveness and impact of the NSA program upon Moalin’s privacy because the government acquires more information about any given individual by monitoring the call records of that individual’s contacts—and by monitoring the call records of those contacts’ contacts.
…
As a result, it would be particularly inappropriate to hold that Smith—again, a case involving a very short-term and particularized, individualized surveillance of a person suspected of already having committed the specific crime under investigation—permitted the warrantless surveillance—including not only collection, but aggregation, retention, and review—of Moalin’s telephone metadata when the Supreme Court has expressly recognized that long-term dragnet surveillance raises distinct constitutional concerns.
In a previous government court filing, then-NSA director Gen. Keith Alexander testified that the NSA had reviewed a phone number “associated with Al-Qaeda,” and the agency saw that this number had “touched a phone number in San Diego.” Finally, Alexander said, the NSA observed Moalin’s number “talking to a facilitator in Somalia” in 2007.
Moalin’s lawyers fired back against this line of reasoning:
This information is relevant. Consistent with the FIG assessment, the 2003 investigation of Moalin “did not find any connection to terrorist activity.” The raw material for this finding would have established Moalin’s lack of connection to terrorist activity. (CR 345-3 at 18.)
What it meant to “touch” a number “associated with al-Qaeda[,]” raises questions. First, what does “associated” with al-Qaeda mean? The trial theory was that Moalin was contacting Aden Ayrow of al-Shabaab, and not someone “associated” with al-Qaeda. Second, was Moalin’s number in direct contact or was it a “hop” or two or three (or even more) away?
The government’s response is due April 15, 2016. With any luck, oral arguments will take place before the end of 2016.
"Backdoor searches" ?
Case name: United States v. Muhtorov
Status: Pending in District Court of Colorado
While all the previous cases have to do with bulk phone metadata surveillance under the now-defunct Section 215 of the Patriot Act, there’s is another case we singled out last year that involves another surveillance law.
Many different types of digital surveillance are authorized under the particularly thorny Section 702 of the FISA Amendment Act. This authorizes PRISM and "upstream" collection programs like XKeyscore, which can capture digital content (not just metadata) primarily where one party is a non-US person outside the US. Executive Order 12333 is believed to generally cover instances where both parties are non-US persons and are both overseas, although EO 12333 can "incidentally" cover wholly domestic communication as well. And with Section 215 now gone, cases under Section 702 take on greater importance.
This particular case begins in February 2013 with Clapper v. Amnesty International. The Supreme Court decided via a 5-4 decision that even groups with substantial reasons to believe that their communications are being surveilled by government intelligence agencies—such as journalists, activists, and attorneys with contacts overseas—have no standing to sue the federal government. The reason? They can't prove that they have been actively monitored. It's a major catch-22 since those who were being watched weren't exactly going to be told about the surveillance. But all that changed in October 2013 when the Justice Department altered its policy, stating that when prosecutors used warrantless wiretaps against criminal defendants, the defendants must be told.
Jamshid Muhtorov became the first such person to receive such a notification. The Uzbek human rights activist has lived in the US as a permanent resident and refugee since 2007. He's accused of providing material support and resources to the Islamic Jihad Union (IJU), and the US believes the IJU is an Islamic terrorist group. His criminal trial was scheduled to begin in April 2012, but it became beset with delays. Muhtorov pleaded not guilty during his arraignment hearing in March 2012. And in January 2014, Muhtorov became the first person to challenge warrantless collection of specific evidence in a criminal case against him.
In the latest development (from November 2015), US District Judge John Kane ruled against Muhtorov’s January 2014 motion to suppress evidence obtained under Section 702. As the judge concludes:
Mr. Muhtorov argues that § 702's minimization procedures are inadequate (and the approval scheme therefore constitutionally unreasonable) because they allow the government to maintain a database of incidentally collected information and query it for law enforcement purposes later. These “backdoor searches,” Muhtorov concludes, require a warrant and render the FAA approval scheme unconstitutional. I disagree. Accessing stored records in a database legitimately acquired is not a search in the context of the Fourth Amendment because there is no reasonable expectation of privacy in that information. Evidence obtained legally by one police agency may be shared with similar agencies without the need for obtaining a warrant, even if sought to be used for an entirely different purpose. This principle applies to fingerprint databases and has also been applied in the foreign intelligence context in Jabara v. Webster, 691 F.2d 272, 27779 (6th Cir. 1982).
The next hearing is currently set for January 4, 2016, and Ars plans on attending.
Cyrus is a former Senior Tech Policy Reporter at Ars Technica, and is also a radio producer and author. His latest book, Habeas Data, about the legal cases over the last 50 years that have had an outsized impact on surveillance and privacy law in America, is out now from Melville House. He is based in Oakland, California.
| 2024-11-07T14:59:51 | en | train |
10,821,026 | Shivetya | 2016-01-01T03:36:21 | A Glove That Lets You Feel What's Far Below the Water | null | http://www.popsci.com/glove-that-lets-you-feel-whats-under-water | 1 | 0 | null | null | null | no_error | A Glove That Lets You Feel What's Far Below The Water | null | Haniya Rae |
A haptic sonar glove developed by Ph.D. candidates Aisen Carolina Chacin and Takeshi Ozu of the Empowerment Informatics program at Tsukuba University in Japan allows wearers to “feel” objects that are just out of reach in underwater settings. In situations where there’s limited visibility, like flooded streets in an emergency, gloves like these could prove especially useful.
Inspired by the dolphin, IrukaTact (iruka means ‘dolphin’ in Japanese) uses echolocation to detect objects below the water, and provides haptic feedback to the wearer with pulsing jets of water. As the wearer’s hand floats closer to a sunken object, the stronger the jets become, and the wearer feels more pressure on her fingertips. Since the apparatus has minimal bulk, the wearer can grasp objects easily after they’ve been found.
“Our overall goal was to expand haptics,” says Chacin. “How can you feel different textures or sense depth without actually touching the object? Vibration alone doesn’t cut it for me, or most people, for that matter.”
The glove uses a MaxBotix MB7066 sonar sensor, three small motors, and an Arduino Pro Mini, and is programmed to send signals to the three middle fingers in silicone thimbles. The motors are placed on top of the index, middle, and ring fingers, and pump water from the surrounding environment. This water is siphoned onto the wearer’s fingertips to create pressure feedback. The thumb and pinky are left free in order to reduce clunkiness, save battery power, and improve movement. A silicone ring around the middle finger, connected to the sensor at the wrist by a small tube encasing the sensor’s wires, keeps the sensor parallel with the hand and allows it to read information from the direction the palm is facing. The sensor can receive and send signals from up to 2 feet of distance underwater, though Chacin says in the future it’d be possible to expand this range.
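
A sketch of the mapping just described — the nearer the echo, the stronger the jets — written in Python for illustration only (the actual device runs on an Arduino Pro Mini, and its firmware may scale things differently); the linear ramp and the 0.61 m cutoff are assumptions based on the 2-foot range mentioned above.

```python
# Illustrative only: convert a sonar distance reading into a pump-strength value.
MAX_RANGE_M = 0.61   # roughly the 2 ft sensing range quoted in the article
MAX_POWER = 255      # full pump drive (e.g., an 8-bit PWM duty cycle)


def pump_power(distance_m):
    """Closer objects produce stronger pressure feedback; out of range gives none."""
    if distance_m is None or distance_m >= MAX_RANGE_M:
        return 0
    closeness = 1.0 - max(distance_m, 0.0) / MAX_RANGE_M
    return int(round(MAX_POWER * closeness))


for d in (0.60, 0.30, 0.10, 0.0):
    print("%.2f m -> power %d" % (d, pump_power(d)))
```
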
Chacin and Ozu, in collaboration with Ars Electronica, designed the glove as a DIY kit in hopes that the glove could potentially be used to search for victims, sunken objects, or hazards like sinkholes.
The glove could also be paired with a device like the Oculus Rift and outfitted with gyroscopes and accelerometers to provide haptic feedback in virtual reality.
| 2024-11-08T03:59:35 | en | train |
10,821,178 | apayan | 2016-01-01T05:29:34 | My Unwanted Sabbatical, Why I Went to Prison for Teaching Physics | null | http://iranpresswatch.org/post/13704/ | 2 | null | null | null | true | no_error | BIHE Professor | My Unwanted Sabbatical, Why I Went to Prison for Teaching Physics | 2015-12-22T18:18:08+00:00 | editor |
December 22, 2015
Source: www.technologyreview.com
Mahmoud Badavam and family on the day of his release from prison.
On April 30, 2015, I was standing behind the very tall and heavy door of Rajaee-Shahr prison in the suburbs of Tehran, anxiously waiting for a moment I’d been imagining for four years. At last, the door opened and I could see the waiting crowd that included my family, friends, and former students. The first thing I did was hug my wife. We were both crying.
In 2011, my career had taken an unexpected and unusual turn: I was imprisoned for the crime of teaching physics at an “unofficial” university called the Baha’i Institute for Higher Education (BIHE).
Iran’s Baha’i community created BIHE in the 1980s after our youth were banned from Iranian universities. I began volunteering there in 1989 after serving three years in prison for simply being an active Baha’i. At BIHE, I taught physics and electronics and, as a member of BIHE’s e-learning committee, was a liaison with MIT’s OpenCourseWare Consortium. When I was arrested in 2011, I was on the engineering faculty at BIHE on top of my day job at an engineering company.
After six months in solitary confinement, I joined 70 to 80 fellow prisoners of conscience (many of us Baha’is); I shared a two-by-four-meter room with five others. I spent most of my time meditating, praying, and reading any available books. I wrote letters to friends and family, talked to fellow prisoners, and taught English.
Weekly visits with my wife and daughter (and sometimes my sister) provided a connection to the outside world. They brought news of calls, e-mails, and visits from my friends and colleagues. Once my daughter brought me a copy of MIT Technology Review, which I read line by line and page by page, including all the advertisements! But the authorities did not allow me to receive the next issue, because it was in English and no one there could verify its contents.
Now that some months have passed since the day of my freedom, I am back to almost normal life and work at the same engineering company. But my heart is still with my fellow prisoners at Rajaee-Shahr.
The experience of being a prisoner showed me there is a lot in our daily lives that we take for granted. After being in solitary for months, I was given access to a 12-inch TV. Looking at the colors on the screen was very exciting—blue, red, green, pink. I hadn’t appreciated the importance of color until I went without it. And now whenever I walk with my wife, I am very conscious of how dear these moments are, and I try to enjoy every one. My wife suffered much more than I did. I have committed myself to comforting her for the rest of my life.
My gratitude for many things also increased: the love of my wife and daughter, the respect of my former students and friends. Now many of my students have graduated and are responsible people with respectable careers as engineers and managers. Many bring their children to visit me. I am happy that I have had a tiny share in their success, and when I look at them I sometimes think the whole prison term was worthwhile.
In prison, I had a chance to read Nelson Mandela’s inspiring book Long Walk to Freedom several times. In it, he writes that education is the great engine of personal development. Through education, the daughter of a peasant can become a doctor, the son of a mine worker can become the head of the mine, a child of farm workers can become the president of a great nation. This was exactly what I wanted to do at BIHE for the young Baha’is.
I am sharing my story with the MIT community because MIT means a lot to me and I follow its news closely. MIT has been involved with BIHE since the beginning, with several MIT alumni, staff members, and academics lending their support. In September 1999, Chuck Vest joined the presidents of several other U.S. universities to appeal to the Iranian government to restore education to Baha’i youth in Iran.
My incarceration taught me that we, the privileged and educated population, have a great responsibility to the world. Humanity is suffering from prejudice, poverty, and lack of democracy. As engineers and scientists, we can do much to address these issues.
Mahmoud Badavam, SM ’78, who works for an engineering consulting company in Tehran, has not resumed teaching physics at BIHE. But he hopes that it will one day be legal to do so.
| 2024-11-07T19:19:19 | en | train |
10,821,183 | jonbaer | 2016-01-01T05:33:20 | Time Warps and Black Holes: The Past, Present and Future of Space-Time | null | http://www.space.com/31495-space-time-warps-and-black-holes.html | 2 | 0 | null | null | null | no_error | Time Warps and Black Holes: The Past, Present & Future of Space-Time | 2015-12-31T15:51:27+00:00 | Nola Taylor Tillman |
A massive object like the Earth will bend space-time, and cause objects to fall toward it.
(Image credit: Science@NASA)
When giving the coordinates for a location, most people provide the latitude, longitude and perhaps altitude. But there is a fourth dimension often neglected: time. The combination of the physical coordinates with the temporal element creates a concept known as space-time, a background for all events in the universe."In physics, space-time is the mathematical model that combines space and time into a single interwoven continuum throughout the universe," Eric Davis, a physicist who works at the Institute for Advanced Studies at Austin and with the Tau Zero Foundation, told Space.com by email. Davis specializes in faster-than-light space-time and anti-gravity physics, both of which use Albert Einstein's general relativity theory field equations and quantum field theory, as well as quantum optics, to conduct lab experiments."Einstein's special theory of relativity, published in 1905, adapted [German mathematician] Hermann Minkowski's unified space-and-time model of the universe to show that time should be treated as a physical dimension on par with the three physical dimensions of space — height, width and length — that we experience in our lives," Davis said. [Einstein's Theory of Relativity Explained (Infographic)]"Space-time is the landscape over which phenomena take place," added Luca Amendola, a member of the Euclid Theory Working Group (a team of theoretical scientists working with the European Space Agency's Euclid satellite) and a professor at Heidelberg University in Germany. "Just as any landscape is not set in stone, fixed forever, it changes just because things happen — planets move, particles interact, cells reproduce," he told Space.com via email.The history of space-timeThe idea that time and space are united is a fairly recent development in the history of science."The concepts of space remained practically the same from the early Greek philosophers until the beginning of the 20th century — an immutable stage over which matter moves," Amendola said. "Time was supposed to be even more immutable because, while you can move in space the way you like, you cannot travel in time freely, since it runs the same for everybody."In the early 1900s, Minkowski built upon the earlier works of Dutch physicist Hendrik Lorentz and French mathematician and theoretical physicist Henri Poincare to create a unified model of space-time. Einstein, a student of Minkowski, adapted Minkowski's model when he published his special theory of relativity in 1905.Breaking space news, the latest updates on rocket launches, skywatching events and more!"Einstein had brought together Poincare's, Lorentz's and Minkowski's separate theoretical works into his overarching special relativity theory, which was much more comprehensive and thorough in its treatment of electromagnetic forces and motion, except that it left out the force of gravity, which Einstein later tackled in his magnum opus general theory of relativity," Davis said.Space-time breakthroughsIn special relativity, the geometry of space-time is fixed, but observers measure different distances or time intervals according to their own relative velocity. 
In general relativity, the geometry of space-time itself changes depending on how matter moves and is distributed."Einstein's general theory of relativity is the first major theoretical breakthrough that resulted from the unified space-time model," Davis said.General relativity led to the science of cosmology, the next major breakthrough that came thanks to the concept of unified space-time."It is because of the unified space-time model that we can have a theory for the creation and existence of our universe, and be able to study all the consequences that result thereof," Davis said.He explained that general relativity predicted phenomena such as black holes and white holes. It also predicts that they have an event horizon, the boundary that marks where nothing can escape, and the point of singularities at their center, a one dimensional point where gravity becomes infinite. General relativity could also explain rotating astronomical bodies that drag space-time with them, the Big Bang and the inflationary expansion of the universe, gravity waves, time and space dilation associated with curved space-time, gravitational lensing caused by massive galaxies, and the shifting orbit of Mercury and other planetary bodies, all of which science has shown true. It also predicts things such as warp-drive propulsions and traversable wormholes and time machines."All of these phenomena rely on the unified space-time model," he said, "and most of them have been observed."An improved understanding of space-time also led to quantum field theory. When quantum mechanics, the branch of theory concerned with the movement of atoms and photons, was first published in 1925, it was based on the idea that space and time were separate and independent. After World War II, theoretical physicists found a way to mathematically incorporate Einstein's special theory of relativity into quantum mechanics, giving birth to quantum field theory."The breakthroughs that resulted from quantum field theory are tremendous," Davis said.The theory gave rise to a quantum theory of electromagnetic radiation and electrically charged elementary particles — called quantum electrodynamics theory (QED theory) — in about 1950. In the 1970s, QED theory was unified with the weak nuclear force theory to produce the electroweak theory, which describes them both as different aspects of the same force. In 1973, scientists derived the quantum chromodynamics theory (QCD theory), the nuclear strong force theory of quarks and gluons, which are elementary particles.In the 1980s and the 1990s, physicists united the QED theory, the QCD theory and the electroweak theory to formulate the Standard Model of Particle Physics, the megatheory that describes all of the known elementary particles of nature and the fundamental forces of their interactions. Later on, Peter Higgs' 1960s prediction of a particle now known as the Higgs boson, which was discovered in 2012 by the Large Hadron Collider at CERN, was added to the mix.Experimental breakthroughs include the discovery of many of the elementary particles and their interaction forces known today, Davis said. They also include the advancement of condensed matter theory to predict two new states of matter beyond those taught in most textbooks. 
More states of matter are being discovered using condensed matter theory, which uses the quantum field theory as its mathematical machinery."Condensed matter has to do with the exotic states of matter, such as those found in metallic glass, photonic crystals, metamaterials, nanomaterials, semiconductors, crystals, liquid crystals, insulators, conductors, superconductors, superconducting fluids, etc.," Davis said. "All of this is based on the unified space-time model."The future of space-timeScientists are continuing to improve their understanding of space-time by using missions and experiments that observe many of the phenomena that interact with it. The Hubble Space Telescope, which measured the accelerating expansion of the universe, is one instrument doing so. NASA's Gravity Probe B mission, which launched in 2004, studied the twisting of space-time by a rotating body — the Earth. NASA's NuSTAR mission, launched in 2012, studies black holes. Many other telescopes and missions have also helped to study these phenomena.On the ground, particle accelerators have studied fast-moving particles for decades."One of the best confirmations of special relativity is the observations that particles, which should decay after a given time, take in fact much longer when traveling very fast, as, for instance, in particle accelerators," Amendola said. "This is because time intervals are longer when the relative velocity is very large."Future missions and experiments will continue to probe space-time as well. The European Space Agency-NASA satellite Euclid, set to launch in 2020, will continue to test the ideas at astronomical scales as it maps the geometry of dark energy and dark matter, the mysterious substances that make up the bulk of the universe. On the ground, the LIGO and VIRGO observatories continue to study gravitational waves, ripples in the curvature of space-time."If we could handle black holes the same way we handle particles in accelerators, we would learn much more about space-time," Amendola said.Merging black holes create ripples in space-time in this artist's concept. Experiments are searching for these ripples, known as gravitational waves, but none have been detected. (Image credit: Swinburne Astronomy Productions)Understanding space-timeWill scientists ever get a handle on the complex issue of space-time? That depends on precisely what you mean."Physicists have an excellent grasp of the concept of space-time at the classical levels provided by Einstein's two theories of relativity, with his general relativity theory being the magnum opus of space-time theory," Davis said. "However, physicists do not yet have a grasp on the quantum nature of space-time and gravity."Amendola agreed, noting that although scientists understand space-time across larger distances, the microscopic world of elementary particles remains less clear."It might be that space-time at very short distances takes yet another form and perhaps is not continuous," Amendola said. "However, we are still far from that frontier."Today's physicists cannot experiment with black holes or reach the high energies at which new phenomena are expected to occur. Even astronomical observations of black holes remain unsatisfactory due to the difficulty of studying something that absorbs all light, Amendola said. Scientists must instead use indirect probes."To understand the quantum nature of space-time is the holy grail of 21st century physics," Davis said. 
"We are stuck in a quagmire of multiple proposed new theories that don't seem to work to solve this problem."Amendola remained optimistic. "Nothing is holding us back," he said. "It's just that it takes time to understand space-time."Follow Nola Taylor Redd on Twitter @NolaTRedd or Google+. Follow us @Spacedotcom, Facebook and Google+. Original article on Space.com.
Nola Taylor Tillman is a contributing writer for Space.com. She loves all things space and astronomy-related, and enjoys the opportunity to learn more. She has a Bachelor’s degree in English and Astrophysics from Agnes Scott college and served as an intern at Sky & Telescope magazine. In her free time, she homeschools her four children. Follow her on Twitter at @NolaTRedd
| 2024-11-08T11:40:29 | en | train |
10,821,263 | shawndumas | 2016-01-01T06:28:33 | Hiking Minimum Wage an Inefficient Tool to Fight Poverty: Fed Research | null | http://www.nbcnews.com/business/economy/hiking-minimum-wage-inefficient-tool-fight-poverty-fed-research-n488111?cid=sm_tw&hootPostID=d54005cf9aa5678fcc7e97e310e5f2b4 | 5 | 0 | null | null | null | no_error | Hiking Minimum Wage an Inefficient Tool to Fight Poverty: Fed Research | 2015-12-30T20:33:29.000Z | By Jeff Cox, CNBC | Increasing the minimum wage is an inefficient way to reduce poverty, according to a Fed research paper that comes amid a national clamor to hike pay for workers at the low end of the salary scale.Fast-food workers and their supporters join a nationwide protest for higher wages and union rights outside McDonald's in Los Angeles on Nov. 10.Lucy Nicholson / ReutersDavid Neumark, visiting scholar at the San Francisco Fed, contends in the paper that raising the minimum wage has only limited benefits in the war against poverty, due in part because relatively few of those falling below the poverty line actually receive the wage.Many of the benefits from raising the wage, a move already undertaken by multiple governments around the country as well as some big-name companies, tend to go to higher-income families, said Neumark, who also pointed to research that shows raising wages kills jobs through higher costs to employers. Neumark is a professor of economics and director of the Center for Economics and Public Policy at the University of California, Irvine."Setting a higher minimum wage seems like a natural way to help lift families out of poverty. However, minimum wages target individual workers with low wages, rather than families with low incomes," he wrote. "Other policies that directly address low family income, such as the earned income tax credit, are more effective at reducing poverty."13 States to Raise Minimum Wage in 2016His conclusions drew a response from advocates for raising the wage who said the argument that boosting wages would cost jobs has been proven invalid and that an increase would help cut into poverty levels."The mainstream view, as illustrated by meta-surveys of the whole minimum wage research field, is that the job loss effects of raising the minimum wage are very, very small," Paul Sohn, general counsel for the National Employment Law Project, said in an email to CNBC.com. An NELP study "shows that the bulk of rigorous minimum wage studies show instead that raising the minimum wage boosts incomes for low-wage workers with only very small adverse impacts on employment."The U.S. poverty rate has been fairly flat in recent years but actually was 2.3 percent higher at the end of 2014 than it was before the Great Recession in 2008, according to the Census Bureau. Advocates for the poor believe raising the minimum wage is a linchpin in helping to eradicate poverty, and 29 states plus the District of Columbia have minimums above the national floor of $7.25.Fighting poverty, though, is more complicated than raising wages.Five Reasons Why Job Creation Is so WeakDemographically, about half of the 3 million or so workers receiving the minimum are 16 to 24 years old, with the highest concentration in the leisure and hospitality industry, according to the Bureau of Labor Statistics. 
Moreover, the percentage of workers at or below the minimum is on the decline, falling to 3.9 percent in 2014 from the most recent high of 6 percent in 2010.Neumark also points out that many of those receiving the wage aren't poor — there are no workers in 57 percent of families below the poverty line, while 46 percent of poor workers are getting paid more than $10.10 an hour, and 36 percent are making more than $12 an hour, he said."Mandating higher wages for low-wage workers does not necessarily do a good job of delivering benefits to poor families," Neumark wrote. "Simple calculations suggest that a sizable share of the benefits from raising the minimum wage would not go to poor families."Increasing the earned income tax credit is a more effective way to fight poverty, he said. A family of four can get a credit of up to $5,548, which Neumark said is more tailored toward low-income families than hikes in the minimum wage."The earned income tax credit targets low-income families much better, increases employment and reduces poverty, and for all these reasons seems far more effective," he wrote. "Policymakers are likely to do a better job fighting poverty by making the EITC more generous than by raising the minimum wage. Furthermore, using both of these policies together is more effective than minimum wage increases in isolation."Jeff Cox, CNBCJeff Cox is a finance editor with CNBC.com where he covers all aspects of the markets and monitors coverage of the financial markets and Wall Street. His stories are routinely among the most-read items on the site each day as he interviews some of the smartest and most well-respected analysts and advisors in the financial world.Over the course of a journalism career that began in 1987, Cox has covered everything from the collapse of the financial system to presidential politics to local government battles in his native Pennsylvania. | 2024-11-08T09:54:16 | en | train |
10,821,336 | waruqi | 2016-01-01T07:22:21 | Itrace v1.3 released | null | https://github.com/waruqi/itrace | 1 | 0 | null | null | null | no_error | GitHub - hack0z/itrace: 🍰 Trace objc method call for ios and mac | null | hack0z |
itrace
Trace objc method call for ios and mac
If you want to reverse engineer the call flow of certain apps, or the private framework class API call flow behind some system app features, give this tool a try.
Just configure the class names and the app name you want to hook, and you can trace the call flow of the relevant features in real time. Batch hooking of many class names is supported.
Features
Batch trace all call flows of the specified classes' objects on iOS
Supports iOS on armv6, armv7, arm64 and Mac on x86, x64
Automatically detects argument types and prints detailed information for all arguments
What's new
Added arm64 support; it has only just been brought up and its stability still needs testing.
There was no time to implement arm64 process injection, so for now it uses Substrate's hookprocess; you therefore need to install libsubstrate.dylib first.
The armv7 version does not depend on Substrate at all.
The arm64 version slightly improves the printing of argument information.
Note: this project is no longer maintained and is provided for reference only.
Configure the classes to hook
Edit the itrace.xml configuration file and add the class names you want to hook:
<?xml version="1.0" encoding="utf-8"?>
<itrace>
<class>
<SSDevice/>
<SSDownload/>
<SSDownloadManager/>
<SSDownloadQueue/>
<CPDistributedMessagingCenter/>
<CPDistributedNotificationCenter/>
<NSString args="0"/>
</class>
</itrace>
Note: try to avoid hooking frequently called classes such as UIView or NSString, otherwise everything becomes very sluggish and awkward to operate.
Note: if hooking a class crashes while printing argument information, add the args="0" attribute to that class name to disable argument printing; this makes things more stable.
To disable argument printing for all classes, you can simply set:
Install the files
Upload all the files in the itracer directory to /tmp on the iOS device using a phone assistant tool:
/tmp/itracer
/tmp/itrace.dylib
/tmp/itrace.xml
Run the trace
Change into the directory containing itracer:
Make it executable:
Run the program:
./itracer springboard (springboard is the name of the process to hook; simple fuzzy matching is supported)
Injecting itrace.dylib with Substrate for tracing
On newer iOS arm64 devices, injecting itrace.dylib with itracer no longer works, and this has not been maintained recently. To inject and trace on arm64, you can fall back on Substrate and load itrace.dylib as a Substrate plugin.
Then configure itrace.plist to specify which process it should be injected into; see the Substrate plugin documentation for the details.
Place itrace.dylib and itrace.plist in the Substrate plugin directory /Library/MobileSubstrate/DynamicLibraries, sign the dylib with ldid -S itrace.dylib, and then restart the process you want to trace. The itrace.xml configuration file path becomes /var/root/itrace/itrace.xml.
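For orientation only, a MobileSubstrate filter plist (which is what itrace.plist would normally be) usually looks like the sketch below. The format and the example bundle identifier are assumptions here rather than something taken from this repository, so point it at the process you actually want to trace:

{ Filter = { Bundles = ( "com.apple.springboard" ); }; }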
To build itrace.dylib for iOS arm64 devices, use the command xmake f -p iphoneos -a arm64.
View the trace log. Note: the log is actually written to Console - Device Log:
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadQueue downloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadManager downloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadManager _copyDownloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadQueue _sendDownloadStatusChangedAtIndex:]: 0
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadQueue _messageObserversWithFunction:context:]: 0x334c5d51: 0x2fe89de0
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadQueue downloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadManager downloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownloadManager _copyDownloads]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownload cachedApplicationIdentifier]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownload status]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [SSDownload cachedApplicationIdentifier]
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [CPDistributedNotificationCenter postNotificationName:userInfo:]: SBApplicationNotificationStateChanged: {
SBApplicationStateDisplayIDKey = "com.apple.AppStore";
SBApplicationStateKey = 2;
SBApplicationStateProcessIDKey = 5868;
SBMostElevatedStateForProcessID = 2;
}
Jan 21 11:12:58 unknown SpringBoard[5706] <Warning>: [itrace]: [3edc9d98]: [CPDistributedNotificationCenter postNotificationName:userInfo:toBundleIdentifier:]: SBApplicationNotificationStateChanged: {
SBApplicationStateDisplayIDKey = "com.apple.AppStore";
SBApplicationStateKey = 2;
SBApplicationStateProcessIDKey = 5868;
SBMostElevatedStateForProcessID = 2;
}: null
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownloadManager _handleMessage:fromServerConnection:]: 0xe6920b0: 0xe007040
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownloadManager _handleDownloadStatesChanged:]: 0xe6920b0
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownloadManager _copyDownloads]
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownload persistentIdentifier]
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownload _addCachedPropertyValues:]: {
I = SSDownloadPhaseDownloading;
}
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownload _applyPhase:toStatus:]: SSDownloadPhaseDownloading: <SSDownloadStatus: 0xe6b8e80>
Jan 21 11:12:59 unknown SpringBoard[5706] <Warning>: [itrace]: [105d7000]: [SSDownloadQueue downloadManager:downloadStatesDidChange:]: <SSDownloadManager: 0x41ea60>: (
"<SSDownload: 0xe6bd970>: -4085275246093726486"
)
How to build
Build the iOS version:
xmake f -p iphoneos
xmake
xmake f -p iphoneos -a arm64
xmake
Build the macOS version:
For more detailed usage of xmake, please refer to the xmake documentation
Dependency library: tbox
Contact
Email: [email protected]
Homepage: TBOOX open source project
QQ groups: 343118190 (TBOOX open source project), 260215194 (ihacker iOS reverse engineering)
WeChat public account: tboox-os
| 2024-11-08T11:01:55 | en | train |
10,821,365 | pmontra | 2016-01-01T07:42:26 | Facebook’s Controversial Free Basics Program Shuts Down in Egypt | null | http://techcrunch.com/2015/12/31/facebooks-controversial-free-basics-program-shuts-down-in-egypt/ | 4 | 0 | null | null | null | no_error | Facebook's Controversial Free Basics Program Shuts Down In Egypt | TechCrunch | 2015-12-31T08:36:48+00:00 | Catherine Shu |
Free Basics, a Facebook program that gives free access to certain Internet services, has been shut down in Egypt. The news comes the week after India’s telecom regulator ordered the suspension of Free Basics as it prepares to hold public hearings on net neutrality.
A report from Reuters cites a government official who said the service was suspended because Facebook had not renewed a necessary permit, and that the decision was not related to security concerns.
A Facebook spokesperson confirmed the shut down in an emailed statement, but did not disclose the reason behind the suspension:
“We’re disappointed that Free Basics will no longer available in Egypt as of December 30, 2015. Already more than 3 million Egyptians use Free Basics and through Free Basics more than 1 million people who were previously unconnected are now using the internet because of these efforts. We are committed to Free Basics, and we’re going to keep working to serve our community to provide access to connectivity and valuable services. We hope to resolve this situation soon.”
Free Basics was available in Egypt on telecom Etisalat Egypt’s network. The program, which is run by Facebook’s Internet.org initiative, lets subscribers to its telecom partners access a limited group of services and websites, like Wikipedia, Bing search, and BBC News, without data charges.
While Free Basics, which has launched in 37 countries so far, is meant to help more people in emerging economies get online, critics say that it violates net neutrality and question Facebook’s motives, since the services included in Free Basics include both its social network and Facebook Messenger.
The controversy has become especially acute in India, Facebook’s second biggest market outside of the United States. Facebook arguably made a major public relations mishap there with its “Save Free Basics” campaign, which called on Facebook users to send a pre-filled email to the Telecom Regulatory Authority of India supporting the program. The company also purchased newspaper and billboard advertisements to defend Free Basics. Many people, however, found the campaign misleading. In response, Facebook chief executive officer Mark Zuckerberg defended the program in an opinion piece for The Times of India, comparing Free Basics to public libraries, while Internet.org vice president took part in a Reddit AMA.
Catherine Shu covered startups in Asia and breaking news for TechCrunch. Her reporting has also appeared in the New York Times, the Taipei Times, Barron’s, the Wall Street Journal and the Village Voice. She studied at Sarah Lawrence College and the Columbia Graduate School of Journalism.
| 2024-11-07T23:25:41 | en | train |
10,821,384 | divramis | 2016-01-01T07:57:18 | SEO+:+Hosting+σε+Ελληνικό+server+-+SEO+|+WEB+DESIGN | null | http://paramarketing.gr/seo-hosting-%cf%83%ce%b5-%ce%b5%ce%bb%ce%bb%ce%b7%ce%bd%ce%b9%ce%ba%cf%8c-server/ | 1 | null | null | null | true | no_error | SEO : Hosting σε Ελληνικό server - Divramis | 2014-04-22T17:00:41+03:00 | null |
One of Google's 200 SEO factors is hosting, from both a geographic and an operational standpoint.
If your site targets the Greek market, you should prefer .gr domain name endings such as .gr, .org.gr, edu.gr, net.gr and .com.gr, along with Greek hosting.
If your site targets the international market, you can choose any of the more than 700 other available domain name endings, preferably .com, .net and .org, and hosting closer to your target market. If your market is in Europe, prefer servers in the European country with your largest customer base or target market, and likewise for other countries.
When you serve several markets or international markets, besides separate sites (one per market) you also need several different servers in the corresponding markets.
The Greek hosting market today
After extensive research, and after examining quite a few Greek companies with Greek hosting (incidentally, Greek hosting is one thing and a Greek company with German or American hosting is another), I settled on the reliable Greek company TopHost.
Companies offering noteworthy Greek hosting based in Athens, where 50% of your prospective customers live, include the following:
Papaki.gr
Tophost.gr
Pointer.gr
Dnhost.gr
Otenet.gr
Cyta.gr
The list does not stop here; you will find many other hosting companies treated unfairly by Google which, although very good and offering quality services, have no visibility in the search engines. Google's algorithm simply drowned them out!
Should I prefer the GR-IX network?
The GR-IX node is the point where selected Internet Service Providers in Greece interconnect. GR-IX implements peering, which allows direct traffic exchange between the Internet providers and consequently achieves very high speeds during communication.
What difference does hosting on the Ultra Fast GR-IX network make compared with hosting that is not on a GR-IX network?
The difference is quite large in terms of network speed. Web hosting in a Greek data center is not by itself enough to achieve the short response times mentioned above.
If, for example, a user in Greece visits a site on a server in Greece that is not on a GR-IX network, the load time will be comparable to that of a page hosted on a server elsewhere in Europe.
By contrast, hosting companies on the GR-IX network take advantage of the short domestic distances, and data traffic is routed without passing through nodes abroad. They thus ensure, for their customers and your site, the shortest possible page load times from within Greece.
Response times over GR-IX reach speeds up to 5 times faster.
Indicative ping times from Greece:
to a server in America: 225 ms
to a server in Europe: 93 ms
in Greece without GR-IX: 80 ms
in Greece with GR-IX: 40 ms
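If you want to check figures like these against your own connection, you can measure the round-trip time yourself; the hostname below is only a placeholder:

ping -c 4 www.example.gr

The average of the reported round-trip times corresponds to the values listed above.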
Does web hosting in Greece improve a site's ranking on Google?
To reach the first page of Google's search results in Greece, you should keep in mind that the algorithm that evaluates and ranks sites is significantly influenced by the following factors:
the hosting location of the website, and
the TLD (.gr).
According to Matt Cutts, who is responsible for the quality of search results at Google, besides the TLD, the hosting country is an important factor, since Google checks the location of the IP of the server on which the website is hosted.
The reason is that they assume a server hosted, for example, in Greece will also contain content that is more useful to users within Greece than the content of a server abroad. In the following video, Google's Matt Cutts confirms the importance of an IP's location for SEO at the local level.
Therefore, hosting on a Greek IP and the ultra-fast GR-IX network can positively affect a page's ranking in the search results for internet users within Greece and give you the competitive advantage you are after.
Put your site on the first page of Google for good with Greek hosting!
TopHost, the hosting champion in Greece (at least for this month)
All TopHost packages are hosted on servers in Greece and let you take advantage of network speeds up to 5 times faster than comparable ones in America and Europe. Make use of the interconnectivity of the Ultra Fast GR-IX network of TopHost's servers for the fastest hosting experience!
Which hosting services can be activated on the TopHost Ultra Fast GR-IX network in Greece?
The services activated in the data center in Greece are all the Shared Hosting and Reseller Hosting packages on Linux and Windows servers, as well as some of TopHost's Dedicated Servers.
SEO, Google and Greek hosting. Sources:
Podcast: Google and the 200 SEO factors
Google and the 200 SEO factors
The web hosting blacklist No. 2
The web hosting blacklist
SEO Google First Page: How to get on the first page of Google
| 2024-11-08T03:56:03 | el | train |
10,821,392 | Tomte | 2016-01-01T08:05:50 | Teller Reveals His Secrets (2012) | null | http://www.smithsonianmag.com/arts-culture/teller-reveals-his-secrets-100744801/?all?no-ist | 2 | 0 | null | null | null | no_error | Teller Reveals His Secrets | 2012-03-01T00:00:00-05:00 | Smithsonian Magazine |
According to magician Teller, "Neuroscientists are novices at deception. Magicians have done controlled testing in human perception for thousands of years."
Jared McMillen / Aurora Select
In the last half decade, magic—normally deemed entertainment fit only for children and tourists in Las Vegas—has become shockingly respectable in the scientific world. Even I—not exactly renowned as a public speaker—have been invited to address conferences on neuroscience and perception. I asked a scientist friend (whose identity I must protect) why the sudden interest. He replied that those who fund science research find magicians “sexier than lab rats.”
I’m all for helping science. But after I share what I know, my neuroscientist friends thank me by showing me eye-tracking and MRI equipment, and promising that someday such machinery will help make me a better magician.
I have my doubts. Neuroscientists are novices at deception. Magicians have done controlled testing in human perception for thousands of years.
I remember an experiment I did at the age of 11. My test subjects were Cub Scouts. My hypothesis (that nobody would see me sneak a fishbowl under a shawl) proved false and the Scouts pelted me with hard candy. If I could have avoided those welts by visiting an MRI lab, I surely would have.
But magic’s not easy to pick apart with machines, because it’s not really about the mechanics of your senses. Magic’s about understanding—and then manipulating—how viewers digest the sensory information.
I think you’ll see what I mean if I teach you a few principles magicians employ when they want to alter your perceptions.
1. Exploit pattern recognition. I magically produce four silver dollars, one at a time, with the back of my hand toward you. Then I allow you to see the palm of my hand empty before a fifth coin appears. As Homo sapiens, you grasp the pattern, and take away the impression that I produced all five coins from a hand whose palm was empty.
2. Make the secret a lot more trouble than the trick seems worth. You will be fooled by a trick if it involves more time, money and practice than you (or any other sane onlooker) would be willing to invest. My partner, Penn, and I once produced 500 live cockroaches from a top hat on the desk of talk-show host David Letterman. To prepare this took weeks. We hired an entomologist who provided slow-moving, camera-friendly cockroaches (the kind from under your stove don’t hang around for close-ups) and taught us to pick the bugs up without screaming like preadolescent girls. Then we built a secret compartment out of foam-core (one of the few materials cockroaches can’t cling to) and worked out a devious routine for sneaking the compartment into the hat. More trouble than the trick was worth? To you, probably. But not to magicians.
3. It’s hard to think critically if you’re laughing. We often follow a secret move immediately with a joke. A viewer has only so much attention to give, and if he’s laughing, his mind is too busy with the joke to backtrack rationally.
4. Keep the trickery outside the frame. I take off my jacket and toss it aside. Then I reach into your pocket and pull out a tarantula. Getting rid of the jacket was just for my comfort, right? Not exactly. As I doffed the jacket, I copped the spider.
5. To fool the mind, combine at least two tricks. Every night in Las Vegas, I make a children’s ball come to life like a trained dog. My method—the thing that fools your eye—is to puppeteer the ball with a thread too fine to be seen from the audience. But during the routine, the ball jumps through a wooden hoop several times, and that seems to rule out the possibility of a thread. The hoop is what magicians call misdirection, a second trick that “proves” the first. The hoop is genuine, but the deceptive choreography I use took 18 months to develop (see No. 2—More trouble than it’s worth).
6. Nothing fools you better than the lie you tell yourself. David P. Abbott was an Omaha magician who invented the basis of my ball trick back in 1907. He used to make a golden ball float around his parlor. After the show, Abbott would absent-mindedly leave the ball on a bookshelf while he went to the kitchen for refreshments. Guests would sneak over, heft the ball and find it was much heavier than a thread could support. So they were mystified. But the ball the audience had seen floating weighed only five ounces. The one on the bookshelf was a heavy duplicate, left out to entice the curious. When a magician lets you notice something on your own, his lie becomes impenetrable.
7. If you are given a choice, you believe you have acted freely. This is one of the darkest of all psychological secrets. I’ll explain it by incorporating it (and the other six secrets you’ve just learned) into a card trick worthy of the most annoying uncle.
THE EFFECT I cut a deck of cards a couple of times, and you glimpse flashes of several different cards. I turn the cards facedown and invite you to choose one, memorize it and return it. Now I ask you to name your card. You say (for example), “The queen of hearts.” I take the deck in my mouth, bite down and groan and wiggle to suggest that your card is going down my throat, through my intestines, into my bloodstream and finally into my right foot. I lift that foot and invite you to pull off my shoe and look inside. You find the queen of hearts. You’re amazed. If you happen to pick up the deck later, you’ll find it’s missing the queen of hearts.
THE SECRET(S) First, the preparation: I slip a queen of hearts in my right shoe, an ace of spades in my left and a three of clubs in my wallet. Then I manufacture an entire deck out of duplicates of those three cards. That takes 18 decks, which is costly and tedious (No. 2—More trouble than it’s worth).
When I cut the cards, I let you glimpse a few different faces. You conclude the deck contains 52 different cards (No. 1—Pattern recognition). You think you’ve made a choice, just as when you choose between two candidates preselected by entrenched political parties (No. 7—Choice is not freedom).
Now I wiggle the card to my shoe (No. 3—If you’re laughing...). When I lift whichever foot has your card, or invite you to take my wallet from my back pocket, I turn away (No. 4—Outside the frame) and swap the deck for a normal one from which I’d removed all three possible selections (No. 5—Combine two tricks). Then I set the deck down to tempt you to examine it later and notice your card missing (No. 6—The lie you tell yourself).
Magic is an art, as capable of beauty as music, painting or poetry. But the core of every trick is a cold, cognitive experiment in perception: Does the trick fool the audience? A magician’s data sample spans centuries, and his experiments have been replicated often enough to constitute near-certainty. Neuroscientists—well intentioned as they are—are gathering soil samples from the foot of a mountain that magicians have mapped and mined for centuries. MRI machines are awesome, but if you want to learn the psychology of magic, you’re better off with Cub Scouts and hard candy.
| 2024-11-08T04:01:53 | en | train |
10,821,399 | egfx | 2016-01-01T08:14:28 | Converts Elixir to JavaScript | null | https://github.com/bryanjos/elixirscript | 3 | 0 | null | null | null | no_error | GitHub - elixirscript/elixirscript: Converts Elixir to JavaScript | null | elixirscript |
The goal is to convert a subset (or full set) of Elixir code to JavaScript, providing the ability to write JavaScript in Elixir. This is done by taking the Elixir AST and converting it into JavaScript AST and then to JavaScript code. This is done using the Elixir-ESTree library.
Documentation for current release
Requirements
Erlang 20 or greater
Elixir 1.6 or greater (must be compiled with Erlang 20 or greater)
Node 8.2.1 or greater (only for development)
Usage
Add dependency to your deps in mix.exs:
{:elixir_script, "~> x.x"}
Add elixir_script to list of mix compilers in mix.exs
Also add elixir_script configuration
def project do
[
app: :my_app,
# ...
# Add elixir_script as a compiler
compilers: Mix.compilers ++ [:elixir_script],
# Our elixir_script configuration
elixir_script: [
# Entry module. Can also be a list of modules
input: MyEntryModule,
# Output path. Either a path to a js file or a directory
output: "priv/elixir_script/build/elixirscript.build.js"
]
]
end
Run mix compile
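For orientation, the entry module named in the configuration above is just an ordinary Elixir module. The sketch below is only an illustration and assumes an application-style start/2 entry point, which may not match what your ElixirScript version expects, so check the documentation for the exact convention:

defmodule MyEntryModule do
  # Hypothetical entry point invoked from the compiled JavaScript bundle;
  # the callback name and arity are assumptions in this sketch.
  def start(_type, _args) do
    :ok
  end
end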
Examples
Application
ElixirScript Todo Example
Library
ElixirScript React
Starter kit
Elixirscript Starter Kit
Development
# Clone the repo
git clone [email protected]:bryanjos/elixirscript.git
#Get dependencies
make deps
# Compile
make
# Test
make test
Communication
#elixirscript on the elixir-lang Slack
Contributing
Please check the CONTRIBUTING.md
| 2024-11-07T20:20:08 | en | train |
10,821,439 | codingdefined | 2016-01-01T08:50:37 | Capture Screen of Web Pages Through URL in Nodejs | null | http://www.codingdefined.com/2016/01/capture-screen-of-web-pages-through-url.html | 3 | 0 | null | null | null | no_error | Capture Screen of Web Pages through URL in Nodejs | null | null |
In this post we will discuss how to capture screenshots of web pages from a URL in Node.js. The following code snippet will convert any web URL into a JPEG image. We will be using PhantomJS, a headless WebKit that is scriptable with a JavaScript API. Since PhantomJS uses WebKit, a real layout and rendering engine, it can capture a web page as a screenshot.
To use PhantomJS in Node.js we will be using phantomjs-node (the phantom npm module), which acts as a bridge between PhantomJS and Node.js. To use this module you need to install PhantomJS, and it should be available in the PATH environment variable. If you get any error while installing, please refer to How to solve Cannot find module weak in Nodejs. Then install the phantom module with the command npm install phantom.
Code :
var phantom = require('phantom');

// Read the URL and the output file name from the command line
var cLArguments = process.argv.slice(2);
var url, file;
if (cLArguments.length === 1) {
    url = cLArguments[0];
    file = 'screenshot.jpg'; // fall back to a default name when none is given
}
if (cLArguments.length > 1) {
    url = cLArguments[0];
    file = cLArguments[1] + '.jpg';
}
console.log(url + ' ' + file);

phantom.create(function(ph) {
    console.log('Inside Phantom');
    ph.createPage(function(page) {
        console.log('Inside Create Page');
        // Use a full-HD viewport so the capture matches a desktop-sized layout
        page.set('viewportSize', {width: 1920, height: 1080});
        page.open(url, function(status) {
            if (status === 'success') {
                console.log('Success');
                page.render(file); // write the screenshot; format comes from the .jpg extension
            } else {
                console.log('Failed to open ' + url);
            }
            ph.exit(); // always shut the PhantomJS process down
        });
    });
}, {
    dnodeOpts: {
        weak: false
    }
});
In the above code we read the URL and the output file name from the command line. We then start the PhantomJS process, create a page object from it, and set a viewport with the desired width and height. After that we open the URL and, if the page loads successfully, render it to the file.
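Assuming the snippet above is saved as capture.js (the script name, URL and output name here are only placeholders), you could run it like this:

node capture.js http://www.google.com google

This loads the page at a 1920x1080 viewport and saves the screenshot as google.jpg in the current directory.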
Please Like and Share the CodingDefined.com Blog, if you find it interesting and helpful.
| 2024-11-07T20:14:12 | en | train |
10,821,451 | edward | 2016-01-01T08:59:59 | Where are we in the Python 3 transition? | null | http://www.snarky.ca/the-stages-of-the-python-3-transition | 3 | 0 | null | null | null | no_error | Where are we in the Python 3 transition? | 2015-12-31T04:35:00.000Z | Brett Cannon |
Dec 30, 2015
3 min read
Python
The Kübler-Ross model outlines the stages that one goes through in dealing with death:
Denial
Anger
Bargaining
Depression
Acceptance
This is sometimes referred to as the five stages of grief. Some have jokingly called them the five stages of software development. I think it actually matches the Python community's transition to Python 3 rather well, both what has occurred and where we currently are (summary: the community is at least in stage 4 with some lucky to already be at the end in stage 5).
Denial
When Python 3 first came out and we said Python 2.7 was going to be the last release of Python 2, I think some people didn't entirely believe us. Others believed that Python 3 didn't offer enough to bother switching to it from Python 2, and so they ignored Python 3's existence. Basically the Python development team and people willing to trust that Python 3 wasn't some crazy experiment that we were going to abandon, ported their code to Python 3 while everyone else waited.
Anger
When it became obvious that the Python development team was serious about Python 3, some people got really upset. There were accusations of us not truly caring about the community and ignoring that the transition was hurting the community irreparably. This was when whispers of forking Python 2 to produce a Python 2.8 release came about, although that obviously never occurred.
Bargaining
Once people realized that being mad about Python 3 wasn't going to solve anything, the bargaining began. People came to the Python development team asking for features to be added to Python 3 to make transitioning easier such as bringing back the u string prefix in Python 3. People also made requests for exceptions to Python 2's "no new features" policy which were also made to allow for Python 2 to stay a feasible version of Python longer while people transitioned (this all landed in Python 2.7.9). We also extended the maintenance timeline of Python 2.7 from 5 years to 10 years to give people until 2020 to transition before people will need to pay for Python 2 support (as compared to the free support that the Python development team has provided).
Depression
7 years into the life of Python 3, it seems a decent amount of people have reached the point of depression about the transition. With Python 2.7 not about to be pulled out from underneath them, people don't feel abandoned by the Python development team. Python 3 also has enough new features that are simply not accessible from Python 2 that people want to switch. And with porting Python 2 code to run on Python 2/3 simultaneously heavily automated and being doable on a per-file basis, people no longer seem to be adverse to porting their code like they once were (although it admittedly still takes some effort).
Unfortunately people are running up against the classic problem of lacking buy-in from management. I regularly hear from people that they would switch if they could, but their manager(s) don't see any reason to switch and so they can't (or that they would do per-file porting, but they don't think they can convince their teammates to maintain the porting work). This can be especially frustrating if you use Python 3 in personal projects but are stuck on Python 2 at work. Hopefully Python 3 will continue to offer new features that will eventually entice reluctant managers to switch. Otherwise financial arguments might be necessary in the form of pointing out that porting to Python 3 is a one-time cost while staying on Python 2 past 2020 will be a perpetual cost for support to some enterprise provider of Python and will cost more in the long-term (e.g., paying for RHEL so that someone supports your Python 2 install past 2020). Have hope, though, that you can get buy-in from management for porting to Python 3 since others have and thus reached the "acceptance" stage.
Acceptance
While some people feel stuck in Python 2 at work and are "depressed" over it, others have reached the point of having transitioned their projects and accepted Python 3, both at work and in personal projects. Various numbers I have seen this year suggest about 20% of the scientific Python community and 20% of the Python web community have reached this point (I have yet to see reliable numbers for the Python community as a whole; PyPI is not reliable enough for various reasons). I consistently hear from people using Python 3 that they are quite happy; I have yet to hear from someone who has used Python 3 that they think it is a worse language than Python 2 (people are typically unhappy with the transition process and not Python 3 itself).
With five years left until people will need to pay for Python 2 support, I'm glad that the community seems to have reached either the "depression" or "acceptance" stages and has clearly moved beyond the "bargaining" stage. Hopefully in the next couple of years, managers across the world will realize that switching to Python 3 is worth it and not as costly as they think it is compared to having to actually pay for Python 2 support and thus more people will get to move to the "acceptance" stage.
| 2024-11-08T12:09:11 | en | train |
10,821,588 | chei0aiV | 2016-01-01T10:57:27 | SFC 2015 YIR: Laying a Foundation for Growing Outreachy | null | https://sfconservancy.org/blog/2015/dec/31/yir-outreachy/ | 2 | 0 | null | null | null | no_error | 2015 YIR: Laying a Foundation for Growing Outreachy | null | Marina Zhurakhinskaya |
by
on December 31, 2015
[ This blog post is the fifth in our series, Conservancy
2015: Year in Review. ]
Marina Zhurakhinskaya, one of the coordinators of Conservancy's Outreachy program, writes about all the exciting things that happened in Outreachy's first year in its new home at Conservancy.
2015 was a year of transition and expansion
for Outreachy, which was
only possible with the fiscal and legal support Conservancy provided us. Becoming a Conservancy Supporter will ensure
the future in which more free software success stories like Outreachy's are
possible.
Outreachy helps people from groups underrepresented in free software get
involved through paid, mentored, remote internships with a variety of free
software projects. After successfully growing as the GNOME Foundation
project for four years, Outreachy needed a new home which could support its
further growth, be designed to work with a multitude of free software
projects, and provide extensive accounting services. With the current
participation numbers of about 35 interns and 15 sponsoring organizations a
round, and two rounds a year, Outreachy requires processing about 210 intern
payments and 30 sponsor invoices a year. Additionally, Outreachy requires
processing travel reimbursements, preparing tax documents, and providing
letters of participation for some interns. Legal entity hosting Outreachy
needs to enter into participation agreements with interns and mentors, as
well as into custom sponsorship agreements with some sponsors.
In February,
Outreachy announced
its transition to Conservancy and adopted its current name. The
alternative of creating its own non-profit was prohibitive because of the
overhead and time commitment that would have required. Conservancy was a
perfect new home, which provided a lot of the services Outreachy needed and
allowed seamlessly continuing the program throughout 2015. The transition to
Conservancy was completed
in May. 30 interns were accepted for the May-August round
with Karen Sandler, Sarah Sharp, and Marina Zhurakhinskaya serving as
Outreachy's Project Leadership Committee and
coordinators.
With the program's needs met, we were able to turn our minds to expanding
the reach of the program. In September,
Outreachy announced the
expantion to people of color underrepresented in tech in the U.S., while
continuing to be open to cis and trans women, trans men, and genderqueer
people worldwide. This expansion was guided by the lack of diversity
revealed by
the employee
demographic data released by many leading U.S. tech companies. Three new
cooridinators, Cindy Pallares-Quezada, Tony Sebro, and Bryan Smith joined
Karen Sandler, Sarah Sharp, and Marina Zhurakhinskaya to help with the
expansion. 37 interns were accepted for
the December-March
round.
One of the most important measures of success for Outreachy is its alums
speaking at free software conferences. In 2015, 27 alums had full-time
sessions at conferences such as linux.conf.au, LibrePlanet, FOSSASIA,
OpenStack Summit, Open Source Bridge, FISL, and LinuxCon. Isabel Jimenez
gave a keynote
about the benefits of contributing to open source at All Things Open. In a
major recognition for an Outreachy alum, Yan Zhu
was named
among the women to watch in IT security by SC Magazine.
Outreachy coordinators are also being recognized for their contributions
to free and open source software. Sarah
Sharp won the
inaugural Women in Open Source Award, sponsored by Red Hat, and
generously donated her stipend to Outreachy. Marina
Zhurakhinskaya won an
O'Reilly Open Source Award.
Outreachy coordinators, mentors, and alums promoted Outreachy and
diversity in free and open source software in the following articles and
conference sessions:
Karen Sandler spoke about
Outreachy in her FOSDEM
and FISL
keynotes
Marina Zhurakhinskaya moderated and Cindy Pallares-Quezada
participated in the panel
about opportunities in open source at the ACM Richard Tapia Celebration
of Diversity in Computing
Mentor and former career
advisor Sumana Harihareswara wrote about the triumph of
Outreachy, with examples from its history
Alum Sucheta Ghoshal spoke about
her experience with
Outreachy at LibrePlanet and alums Jessica Canepa, Barbara Miller, and
Adam Okoye spoke about their experience
with Outreachy at Open Source Bridge
Linux kernel coordinator
Julia Lawall moderated the panel on
Outreachy internships with the Linux kernel at LinuxCon North America;
panel participants included Karen Sandler, mentors Greg Kroah-Hartman, Jes
Sorensen, and Konrad Wilk, and alums Lidza Louina, Lisa Nguyen, and Elena
Ufimtseva
Marina Zhurakhinskaya
was interviewed about Outreachy and her other diversity
work by
Opensource.com and, for the Ada Lovelace Day, by the Free
Software Foundation
Weaving their work on
Outreachy into their greater involvement in free software diversity efforts,
Sarah Sharp wrote about what
makes a good community on her blog, Marina Zhurakhinskaya gave
a keynote
on effective outreach at Fossetcon, and Cindy Pallares-Quezada wrote an
article on diversity
in open source highlights from 2015 for
Opensource.com
Outreachy is made
possible thanks to the contributions of its many coordinators, mentors, and
sponsors. For May and December rounds, with the credit given for the highest
level of sponsorship, Intel and Mozilla sponsored Outreachy at the Ceiling
Smasher level, Red Hat at the Equalizer level, Google, Hewlett-Packard,
Linux Foundation, and OpenStack Foundation at the Promoter level, and
Cadasta, Electronic Frontier Foundation, Endless, Free Software Foundation,
GNOME, Goldman Sachs, IBM, M-Lab, Mapbox, Mapzen, Mifos, Open Source
Robotics Foundation, Perl, Samsung, Twitter, VideoLAN, Wikimedia Foundation,
and Xen Project at the Includer level. Additionally, Red Hat supports
Outreachy by contributing Marina Zhurakhinskaya's time towards the
organization of the program and the GNOME Foundation provides infrastructure
support. However, first and foremost, Outreachy is possible thanks to
Conservancy being in place to be its non-profit home and handle the fiscal
and legal needs of the program.
Conservancy's service of helping free software projects establish a
foundation for growth without the prohibitive overhead of creating their own
non-profits is a cornerstone of the free software community. We need
Conservancy securely in place to continue providing exceptional support for
its 33 member projects and to offer this support to new projects. To help
free software thrive, please join Outreachy's Project Leadership Committee
members Karen Sandler, Sarah Sharp, and Marina Zhurakhinskaya
in becoming a
Conservancy Supporter.
| 2024-11-08T06:03:00 | en | train |
10,821,686 | networked | 2016-01-01T12:06:14 | PCem - an emulator for old x86 computers | null | http://pcem-emulator.co.uk/ | 3 | 0 | null | null | null | no_error | PCem | null | null |
19th December 2021
Michael Manley is taking over as project maintainer, and will be responsible for development and future direction of the project.
The forums have also been reopened.
14th June 2021
Just a quick note to say that I (Sarah Walker) have decided to call it quits. Thanks to those who sent supportive messages, they're genuinely appreciated. Also thanks to those who have supported me and the project over the last decade or so.
If anyone is interested in taking over the project & github repo, please contact me.
1st December 2020
PCem v17 released. Changes from v16 :
New machines added - Amstrad PC5086, Compaq Deskpro, Samsung SPC-6033P, Samsung SPC-6000A, Intel VS440FX, Gigabyte GA-686BX
New graphics cards added - 3DFX Voodoo Banshee, 3DFX Voodoo 3 2000, 3DFX Voodoo 3 3000, Creative 3D Blaster Banshee, Kasan Hangulmadang-16, Trident TVGA9000B
New CPUs - Pentium Pro, Pentium II, Celeron, Cyrix III
VHD disc image support
Numerous bug fixes
A few other bits and pieces
Thanks to davide78, davefiddes, Greatpsycho, leilei, sards3, shermanp, tigerforce and twilen for contributions towards this release.
19th April 2020
PCem v16 released. Changes from v15 :
New machines added - Commodore SL386SX-25, ECS 386/32, Goldstar GDC-212M, Hyundai Super-286TR, IBM PS/1 Model 2133 (EMEA 451), Itautec Infoway Multimidia, Samsung SPC-4620P, Leading Edge Model M
New graphics cards added - ATI EGA Wonder 800+, AVGA2, Cirrus Logic GD5428, IBM 1MB SVGA Adapter/A
New sound card added - Aztech Sound Galaxy Pro 16 AB (Washington)
New SCSI card added - IBM SCSI Adapter with Cache
Support FPU emulation on pre-486 machines
Numerous bug fixes
A few other bits and pieces
Thanks to EluanCM, Greatpsycho, John Elliott, and leilei for contributions towards this release.
19th May 2019
PCem v15 released. Changes from v14 :
New machines added - Zenith Data SupersPort, Bull Micral 45, Tulip AT Compact, Amstrad PPC512/640, Packard Bell PB410A, ASUS P/I-P55TVP4, ASUS P/I-P55T2P4, Epox P55-VA, FIC VA-503+
New graphics cards added - Image Manager 1024, Sigma Designs Color 400, Trigem Korean VGA
Added emulation of AMD K6 family and IDT Winchip 2
New CPU recompiler. This provides several optimisations, and the new design allows for greater portability and more scope for optimisation in the future
Experimental ARM and ARM64 host support
Read-only cassette emulation for IBM PC and PCjr
Numerous bug fixes
Thanks to dns2kv2, Greatpsycho, Greg V, John Elliott, Koutakun, leilei, Martin_Riarte, rene, Tale and Tux for contributions towards this release.
20th April 2018
PCem v14 released. Changes from v13.1 :
New machines added - Compaq Portable Plus, Compaq Portable II, Elonex PC-425X, IBM PS/2 Model 70 (types 3 & 4), Intel Advanced/ZP, NCR PC4i, Packard Bell Legend 300SX, Packard Bell PB520R, Packard Bell PB570, Thomson TO16 PC, Toshiba T1000, Toshiba T1200, Xi8088
New graphics cards added - ATI Korean VGA, Cirrus Logic CL-GD5429, Cirrus Logic CL-GD5430, Cirrus Logic CL-GD5435, OAK OTI-037, Trident TGUI9400CXi
New network adapters added - Realtek RTL8029AS
Iomega Zip drive emulation
Added option for default video timing
Added dynamic low-pass filter for SB16/AWE32 DSP playback
Can select external video card on some systems with built-in video
Can use IDE hard drives up to 127 GB
Can now use 7 SCSI devices
Implemented CMPXCHG8B on Winchip. Can now boot Windows XP on Winchip processors
CD-ROM emulation on OS X
Tweaks to Pentium and 6x86 timing
Numerous bug fixes
Thanks to darksabre76, dns2kv2, EluanCM, Greatpsycho, ja've, John Elliott, leilei and nerd73 for contributions towards this release.
17th December 2017
PCem v13.1 released. This is a quick bugfix release, with the following changes from v13 :
Minor recompiler tweak, fixed slowdown in some situations (mainly seen on Windows 9x just after booting)
Fixed issues with PCJr/Tandy sound on some Sierra games
Fixed plasma display on Toshiba 3100e
Fixed handling of configurations with full stops in the name
Fixed sound output gain when using OpenAL Soft
Switched to using OpenAL Soft by default
12th December 2017
Re-uploaded v13 Windows archive with missing mda.rom included - please re-download if you've been having issues.
11th December 2017
PCem v13 released. Changes from v12 :
New machines added - Atari PC3, Epson PC AX, Epson PC AX2e, GW-286CT GEAR, IBM PS/2 Model 30-286, IBM PS/2 Model 50, IBM PS/2 Model 55SX, IBM PS/2 Model 80, IBM XT Model 286, KMX-C-02, Samsung SPC-4200P, Samsung SPC-4216P, Toshiba 3100e
New graphics cards - ATI Video Xpression, MDSI Genius
New sound cards added - Disney Sound Source, Ensoniq AudioPCI (ES1371), LPT DAC, Sound Blaster PCI 128
New hard drive controllers added - AT Fixed Disk Adapter, DTC 5150X, Fixed Disk Adapter (Xebec), IBM ESDI Fixed Disk Controller, Western Digital WD1007V-SE1
New SCSI adapters added - Adaptec AHA-1542C, BusLogic BT-545S, Longshine LCS-6821N, Rancho RT1000B, Trantor T130B
New network adapters added - NE2000 compatible
New cross-platform GUI
Voodoo SLI emulation
Improvements to Sound Blaster emulation
Improvements to Pentium timing
Various bug fixes
Minor optimisations
Thanks to AmatCoder, basic2004, bit, dns2k, ecksemess, Greatpsycho, hOMER247, James-F, John Elliott, JosepMa, leilei, neozeed, ruben_balea, SA1988 and tomaszkam for contributions towards this release.
18th February 2017
PCem v12 released. Changes from v11 :
New machines added - AMI 386DX, MR 386DX
New graphics cards - Plantronics ColorPlus, Wyse WY-700, Obsidian SB50, Voodoo 2
CPU optimisations - up to 50% speedup seen
3DFX optimisations
Improved joystick emulation - analogue joystick up to 8 buttons, CH Flightstick Pro, ThrustMaster FCS, SideWinder pad(s)
Mouse can be selected between serial, PS/2, and IntelliMouse
Basic 286/386 prefetch emulation - 286 & 386 performance much closer to real systems
Improved CGA/PCjr/Tandy composite emulation
Various bug fixes
Thanks to Battler, leilei, John Elliott, Mahod, basic2004 and ecksemmess for contributions towards this release.
7th June 2016
Updated v11 binary - anyone who's been having problems with Voodoo emulation should re-download.
5th June 2016
PCem v11 released. Changes from v10.1 :
New machines added - Tandy 1000HX, Tandy 1000SL/2, Award 286 clone, IBM PS/1 model 2121
New graphics card - Hercules InColor
3DFX recompiler - 2-4x speedup over previous emulation
Added Cyrix 6x86 emulation
Some optimisations to dynamic recompiler - typically around 10-15% improvement over v10, more when MMX used
Fixed broken 8088/8086 timing
Fixes to Mach64 and ViRGE 2D blitters
XT machines can now have less than 640kb RAM
Added IBM PS/1 audio card emulation
Added Adlib Gold surround module emulation
Fixes to PCjr/Tandy PSG emulation
GUS now in stereo
Numerous FDC changes - more drive types, FIFO emulation, better support of XDF images, better FDI support
CD-ROM changes - CD-ROM IDE channel now configurable, improved disc change handling, better volume control support
Now directly supports .ISO format for CD-ROM emulation
Fixed crash when using Direct3D output on Intel HD graphics
Various other fixes
Thanks to Battler, SA1988, leilei, Greatpsycho, John Elliott, RichardG867, ecksemmess and cooprocks123e for contributions towards this release.
7th November 2015
PCem v10.1 released. This is a minor bugfix release. Changes from v10 :
Fixed buffer overruns in PIIX and ET4000/W32p emulation
Add command line options to start in fullscreen and to specify config file
Emulator doesn't die when the CPU jumps to an unexecutable address
Removed Voodoo memory dump on exit
24th October 2015
PCem v10 released. Changes from v9 :
New machines - AMI XT clone, VTech Laser Turbo XT, VTech Laser XT3, Phoenix XT clone, Juko XT clone, IBM PS/1 model 2011, Compaq Deskpro 386, DTK 386SX clone, Phoenix 386 clone, Intel Premiere/PCI, Intel Advanced/EV
New graphics cards - IBM VGA, 3DFX Voodoo Graphics
Experimental dynamic recompiler - up to 3x speedup
Pentium and Pentium MMX emulation
CPU fixes - fixed issues in Unreal, Half-Life, Final Fantasy VII, Little Big Adventure 2, Windows 9x setup, Coherent, BeOS and others
Improved FDC emulation - more accurate, supports FDI images, supports 1.2MB 5.25" floppy drive emulation, supports write protect correctly
Internal timer improvements, fixes sound in some games (eg Lion King)
Added support for up to 4 IDE hard drives
MIDI OUT code now handles sysex commands correctly
CD-ROM code now no longer crashes Windows 9x when CD-ROM drive empty
Fixes to ViRGE, S3 Vision series, ATI Mach64 and OAK OTI-067 cards
Various other fixes/changes
Thanks to te_lanus, ecksemmess, nerd73, GeeDee, Battler, leilei and kurumushi for contributions towards this release.
4th October 2014
PCem v9 released. Changes from v8.1 :
New machines - IBM PCjr
New graphics cards - Diamond Stealth 3D 2000 (S3 ViRGE/325), S3 ViRGE/DX
New sound cards - Innovation SSI-2001 (using ReSID-FP)
CPU fixes - Windows NT now works, OS/2 2.0+ works better
Fixed issue with port 3DA when in blanking, DOS 6.2/V now works
Re-written PIT emulation
IRQs 8-15 now handled correctly, Civilization no longer hangs
Fixed vertical axis on Amstrad mouse
Serial fixes - fixes mouse issues on Win 3.x and OS/2
New Windows keyboard code - should work better with international keyboards
Changes to keyboard emulation - should fix stuck keys
Some CD-ROM fixes
Joystick emulation
Preliminary Linux port
Thanks to HalfMinute, SA1988 and Battler for contributions towards this release.
3rd January 2014
PCem v8.1 released. This fixes a number of issues in v8.
20th December 2013
PCem v8 released. Changes from v0.7 :
New machines - SiS496/497, 430VX
WinChip emulation (including MMX emulation)
New graphics cards - S3 Trio64, Trident TGUI9440AGi, ATI VGA Edge-16, ATI VGA Charger, OAK OTI-067, ATI Mach64
New sound cards - Adlib Gold, Windows Sound System, SB AWE32
Improved GUS emulation
MPU-401 emulation (UART mode only) on SB16 and AWE32
Fixed DMA bug, floppy drives work properly in Windows 3.x
Fixed bug in FXAM - fixes Wolf 3D, Dogz, some other stuff as well
Other FPU fixes
Fixed serial bugs, mouse no longer disappears in Windows 9x hardware detection
Major reorganisation of CPU emulation
Direct3D output mode
Fullscreen mode
Various internal changes
13th July 2013
PCem is now in source control at http://www.retrosoftware.co.uk/hg/pcem.
3rd August 2012
PCem v0.7 released. Windows 98 now works, Win95 more stable, more machines + graphics cards, and a huge number of fixes.
19th December 2011
PCem v0.6 released. Windows 95 now works, FPU emulation, and loads of other stuff.
23rd September 2011
Uploaded a fixed version of PCem v0.5, which has working sound.
21st September 2011
PCem v0.5 released. Loads of fixes + new features in this version.
13th February 2011
PCem v0.41a released. This fixes a disc corruption bug, and re-adds (poor) composite colour emulation.
1st February 2011
PCem v0.41 released. This fixes some embarrassing bugs in v0.4, as well as a few games.
27th July 2010
PCem v0.4 released. 386/486 emulation (buggy), GUS emulation, accurate 8088/8086 timings, and lots of other changes.
30th July 2008
PCem v0.3 released. This adds more machines, SB Pro emulation, SVGA emulation, and some other stuff.
14th October 2007
PCem v0.2a released. This is a bugfix release over v0.2.
10th October 2007
PCem v0.2 released. This adds PC1640 and AT emulation, 286 emulation, EGA/VGA emulation, Soundblaster emulation, hard disc emulation, and some bugfixes.
19th August 2007
PCem archive updated with (hopefully) bugfixed version.
15th August 2007
PCem v0.1 released. This is a new emulator for various old XT-based PCs.
| 2024-11-07T23:20:49 | en | train |
10,821,721 | asadjb | 2016-01-01T12:31:58 | Dropletconn: CLI utility to quickly connect to your Digital Ocean droplets | null | https://github.com/theonejb/dropletconn | 3 | 0 | null | null | null | no_error | GitHub - theonejb/dropletconn: A simple golang base CLI app to list and connect to your DigitalOcean droplets | null | theonejb | dropletconn
List and connect to your Digital Ocean droplets instantly (without a .ssh/config)
Quick Start
go get github.com/theonejb/dropletconn
go install github.com/theonejb/dropletconn
dropletconn config
dropletconn list
dropletconn connect <NAME OF DROPLET>
Installing and Configuring dropletconn
Listing your droplets
Connecting to a droplet
Usage
To use, go get github.com/theonejb/dropletconn and go install github.com/theonejb/dropletconn. dropletconn is the
name of the genrated binary. I personally have it aliased to dc using export dc=dropletconn in my .zshrc file since
I use it atleast 20 times a day to connect to various servers at work.
You will also need to generate a token from Digital Ocean API Tokens
that dropletconn will use to get a list of droplets available in your account. For safety, use a Read only scoped token.
Available commands and their usage is described here. Some commands have a short version as well, which is what you see after the OR pipe (|) in their help text below.
config: Generate config file that stores the API token and other settings. This needs to be generated before the rest of
the commands can be used
list | l [<FILTER EXPRESSION>]..: Lists all droplets from your account. You can optionally pass a number of filter expressions.
If you do, only droplets whose names or IPs contain at least one of the given filter expressions will be listed
connect | c NAME: Connect to the droplet with the given name
run | r <FILTER EXPRESSION> <COMMAND>: Runs the given command on all droplets matching the filter expression. The filter expression is required, and only one filter
expression can be given
You can pass an optional --force-update flag. By default, the list of droplets is cached for a configurable duration (as set in
the config file). Passing this flag forces an update of this list before running the command.
The list command also accepts an optional --list-public-ip flag. If this flag is used, only the public IP of the nodes is printed, nothing else.
This is in case you want a list of all the IPs in your DO account. I needed this to create a Fabric script.
Note: The way flags are parsed, you have to list your flags before your commands. For example, you can not do dropletconn list --list-public-ip.
Instead, you need to do dropletconn --list-public-ip list. Same for the --force-update flag.
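For example, a typical session might look like this (the filter text and droplet name are made up for illustration; the exact output depends on your account):
dropletconn config
dropletconn list web
dropletconn --force-update list
dropletconn --list-public-ip list
dropletconn connect web-01
dropletconn run web uptime
The first command writes the config file with your API token; the rest work against the (cached) droplet list, using the flag-before-command ordering described above.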
To enable completion of droplet names, source the included Zsh completion file. Credit for that script goes to James Coglan. I copied it from his blog
(https://blog.jcoglan.com/2013/02/12/tab-completion-for-your-command-line-apps/).
| 2024-11-08T08:52:03 | en | train |
10,821,797 | SimplyUseless | 2016-01-01T13:13:06 | Web attack knocks BBC websites offline | null | http://www.bbc.co.uk/news/technology-35204915 | 2 | 0 | null | null | null | no_error | Web attack knocks BBC websites offline | 2015-12-31T10:41:54.000Z | BBC News | All the BBC's websites were unavailable early on Thursday morning because of a large web attack. The problems began about 0700 GMT and meant visitors to the site saw an error message rather than webpages. Sources within the BBC said the sites were offline thanks to what is known as a "distributed denial of service" attack. An earlier statement tweeted by the BBC laid the blame for the problems on a "technical issue". In the message the corporation said it was aware of the ongoing trouble and was working to fix it so sites, services and pages were reachable again. At midday it released another statement saying that the BBC website was now "operating normally". "We apologise for any inconvenience you may have experienced," it said. The BBC has yet to confirm or deny that such an attack was responsible for the problems. It is now believed that a web attack technique known as a "distributed denial of service" was causing the patchy response. This aims to knock a site offline by swamping it with more traffic than it can handle. The attack on the BBC hit the main website as well as associated services, including the main iPlayer catch-up service and iPlayer Radio app, which were also not working properly. Social media reaction to the trouble was swift. Many urged the BBC to get the site back up quickly and lamented how long it was taking to fix the technical troubles. By 1030 GMT the site was largely working again, though some pages and indexes took longer than normal to load. The BBC's crop of websites has suffered other technical problems in the past. In July 2014, the iPlayer and many of its associated sites were offline for almost an entire weekend. That fault was traced to a database that sits behind the catch-up TV service. | 2024-11-08T08:12:45 | en | train |
10,821,882 | empressplay | 2016-01-01T13:58:31 | Perth man gets $330 Uber charge for 20km NYE ride | null | http://www.adelaidenow.com.au/business/companies/perth-man-lodges-complaint-after-copping-massive-uber-bill-on-new-years-eve/news-story/2a9d9f2596f19d7ba0f38a569b3fe574?nk=c8a03f3813ae2218c769e9ef8ed74320-1451656639 | 2 | 0 | null | null | null | no_error | No Cookies | The Advertiser | null | null |
Please note that by blocking any or all cookies you may not have access to certain features, content or personalization. For more information see our Cookie Policy.
To enable cookies, follow the instructions for your browser below.
Facebook App: Open links in External Browser
There is a specific issue with the Facebook in-app browser intermittently making requests to websites without cookies that had previously been set. This appears to be a defect in the browser which should be addressed soon. The simplest approach to avoid this problem is to continue to use the Facebook app but not use the in-app browser. This can be done through the following steps:
1. Open the settings menu by clicking the hamburger menu in the top right
2. Choose “App Settings” from the menu
3. Turn on the option “Links Open Externally” (This will use the device’s default browser)
Enabling Cookies in Internet Explorer 7, 8 & 9
1. Open the Internet Browser
2. Click Tools > Internet Options > Privacy > Advanced
3. Check Override automatic cookie handling
4. For First-party Cookies and Third-party Cookies click Accept
5. Click OK and OK
Enabling Cookies in Firefox
1. Open the Firefox browser
2. Click Tools > Options > Privacy > Use custom settings for history
3. Check Accept cookies from sites
4. Check Accept third party cookies
5. Select Keep until: they expire
6. Click OK
Enabling Cookies in Google Chrome
1. Open the Google Chrome browser
2. Click Tools > Options > Privacy Options > Under the Hood > Content Settings
3. Check Allow local data to be set
4. Uncheck Block third-party cookies from being set
5. Uncheck Clear cookies
6. Close all
Enabling Cookies in Mobile Safari (iPhone, iPad)
1. Go to the Home screen by pressing the Home button or by unlocking your phone/iPad
2. Select the Settings icon.
3. Select Safari from the settings menu.
4. Select ‘accept cookies’ from the safari menu.
5. Select ‘from visited’ from the accept cookies menu.
6. Press the home button to return the the iPhone home screen.
7. Select the Safari icon to return to Safari.
8. Before the cookie settings change will take effect, Safari must restart. To restart Safari press and hold the Home button (for around five seconds) until the iPhone/iPad display goes blank and the home screen appears.
9. Select the Safari icon to return to Safari.
| 2024-11-08T10:33:15 | en | train |
10,821,893 | xCathedra | 2016-01-01T14:03:27 | Automation should be like Iron Man, not Ultron | null | http://queue.acm.org/detail.cfm?id=2841313 | 4 | 0 | null | null | null | no_error | Automation Should Be Like Iron Man, Not Ultron | null | null |
Everything Sysadmin - @YesThatTom
October 31, 2015, Volume 13, issue 8
The "Leftover Principle" Requires Increasingly More Highly-skilled Humans.
Thomas A. Limoncelli
Q: Dear Tom: A few years ago we automated a major process in our system administration team. Now the system is impossible to debug. Nobody remembers the old manual process and the automation is beyond what any of us can understand. We feel like we've painted ourselves into a corner. Is all operations automation doomed to be this way?
A: The problem seems to be that this automation was written to be like Ultron, not Iron Man.
Iron Man's exoskeleton takes the abilities that Tony Stark has and accentuates them. Tony is a smart, strong guy. He can calculate power and trajectory on his own. However, by having his exoskeleton do this for him, he can focus on other things. Of course, if he disagrees or wants to do something the program wasn't coded to do, he can override the trajectory.
Ultron, on the other hand, was intended to be fully autonomous. It did everything and was, basically, so complex that when it had to be debugged the only choice was (spoiler alert!) to destroy it.
Had the screenwriter/director Joss Whedon consulted me (and Joss, if you are reading this, you really should have), I would have found a way to insert the famous Brian Kernighan quote, "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it."
The Source of the Problem:
Before we talk about how to prevent this kind of situation, we should discuss how we get into it.
The first way we get into this trap is by automating the easy parts and leaving the rest to be done manually. This sounds like the obvious way to automate things and, in fact, is something I generally encouraged until my awareness was raised by John Allspaw's excellent two-part blog post "A Mature Role for Automation" (http://www.kitchensoap.com/2012/09/21/a-mature-role-for-automation-part-i).
You certainly shouldn't automate the difficult cases first. What we learn while automating the easy cases makes us better prepared to automate the more difficult cases. This is called the Leftover Principle. You automate the easy parts and what is "left over" is done by humans.
In the long run this creates a very serious problem. The work left over for people to do becomes, by definition, more difficult. At the start of the process, people were doing a mixture of simple and complex tasks. After a while the mix shifts more and more towards the complex. This is a problem because people aren't getting smarter over time. Moore's Law predicts that computers will get more powerful over time, but sadly there is no such prediction about people.
Another reason the work becomes more difficult is that it becomes rarer. Easier work, done frequently, keeps a person's skills fresh and keeps us ready for the rare but difficult tasks.
Taken to its logical conclusion, this paradigm results in a need to employ impossibly smart people to do impossibly difficult work. Maybe this is why Google's recruiters sound so painfully desperate when they call about joining their SRE team.
One way to avoid the problems of the leftover principle is called the Compensatory Principle. There are certain tasks that people are good at that machines don't do well. Likewise there are other tasks that machines are good at that people don't do well. The compensatory principle says that people and machines should each do what they are good at and not attempt what they don't do well. That is, each group should compensate for the other's deficiencies.
Machines don't get bored, so they are better at repetitive tasks. They don't sleep, so they are better at tasks that must be done at all hours of the night. They are better at handling many operations at once, and at operations that require smooth or precise motion. They are better at literal reproduction, access restriction, and quantitative assessment.
People are better at improvisation and being flexible, exercising judgment, and coping with variations in written material, perceiving feelings.
Let's apply this principle to a monitoring system. The monitoring system collects metrics every five minutes, stores them, and then analyzes the data for the purposes of alerting, debugging, visualization, and interpretation.
A person could collect data about a system every five minutes, and with multiple shifts of workers they could do it around the clock. However, the people would become bored and sloppy. Therefore it is obvious that the data collection should be automated. Alerting requires precision, which is also best done by computers. However, while the computer is better at visualizing the data, people are better at interpreting those visualizations. Debugging requires improvisation, another human skill, so again people are assigned those tasks.
John Allspaw points out that only rarely can a project be broken down into such clear-cut cases of functionality this way.
Doing Better
A better way is to base automation decisions on the complementarity principle. This principle looks at automation from the human perspective. It improves the long-term results by considering how people's behavior will change as a result of automation.
For example, the people planning the automation should consider what is learned over time by doing the process manually and how that would be changed or reduced if the process was automated. When a person first learns a task, they are focused on the basic functions needed to achieve the goal. However, over time, they understand the ecosystem that surrounds the process and gain a big-picture view. This lets them perform global optimizations. When a process is automated the automation encapsulates learning thus far, permitting new people to perform the task without having to experience that learning. This stunts or prevents future learning. This kind of analysis is part of a cognitive systems engineering (CSE) approach.
The complementarity principle combines CSE with a joint cognitive system (JCS) approach. JCS examines how automation and people work together. A joint cognitive system is characterized by its ability to stay in control of a situation.
In other words, if you look at a highly automated system and think, "Isn't it beautiful? We have no idea how it works," you may be using the leftover principle. If you look at it and say, "Isn't it beautiful how we learn and grow together, sharing control over the system," then you've done a good job of applying the complementarity principle.
Designing automation using the complementarity principle is a relatively new concept and I admit I'm no expert, though I can look back at past projects and see where success has come from applying this principle by accident. Even the blind squirrel finds some acorns!
For example, I used to be on a team that maintained a very large (for its day) cloud infrastructure. We were responsible for the hundreds of physical machines that supported thousands of virtual machines.
We needed to automate the process of repairing the physical machines. When there was a hardware problem, virtual machines had to be moved off the physical machine, the machine had to be diagnosed, and a request for repairs had to be sent to the hardware techs in the data center. After the machine was fixed, it needed to be re-integrated into the cloud.
The automation we created abided by the complementarity principle. It was a partnership between human and machine. It did not limit our ability to learn and grow. The control over the system was shared between the automation and the humans involved.
In other words, rather than creating a system that took over the cluster and ran it, we created one that partnered with humans to take care of most of the work. It did its job autonomously, but we did not step on each other's toes.
The automation had two parts. The first part was a set of tools that the team used to do the various related tasks. Only after these tools had been working for some time did we build a system that automated the global process, and it did so more like an exoskeleton assistant than like a dictator.
The repair process was functionally decomposed into five major tasks, and one tool was written to handle each of them. The tools were (a) Evacuation: any virtual machines running on the physical machine needed to be migrated live to a different machine; (b) Revivification: an evacuation process required during the extreme case where a virtual machine had to be restarted from its last snapshot; (c) Recovery: attempts to get the machine working again by simple means such as powering it off and on again; (d) Send to Repair Depot: generate a work order describing what needs to be fixed and send this information to the data center technicians who actually fixed the machine; (e) Re- assimilate: once the machine has been repaired, configure it and re-introduce it to the service.
As the tools were completed, they replaced their respective manual processes. However the tools provided extensive visibility as to what they were doing and why.
The next step was to build automation that could bring all these tools together. The automation was designed based on a few specific principles:
• It should follow the same methodology as the human team members.
• It should use the same tools as the human team members.
• If another team member was doing administrative work on a machine or cluster (group of machines), the automation would step out of the way if asked, just like a human team member would.
• Like a good team member, if it got confused it would back off and ask other members of the team for help.
The automation was a state-machine-driven repair system. Each physical machine was in a particular state: normal, in trouble, recovery in progress, sent for repairs, being re-assimilated, and so on. The monitoring system that would normally page people when there was a problem instead alerted our automation. Based on whether the alerting system had news of a machine having problems, being dead, or returning to life, the appropriate tool was activated. The tool's result determined the new state assigned to the machine.
If the automation got confused, it paused its work on that machine and asked a human for help by opening a ticket in our request tracking system.
If a human team member was doing manual maintenance on a machine, the automation was told to not touch the machine in an analogous way to how human team members would be, except people could now type a command instead of shouting to their coworkers in the surrounding cubicles.
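To make that structure concrete, here is a minimal sketch in Go of a state-machine-driven repair loop of the kind described above. This is not the team's actual code: the state names, events, and function signatures are assumptions for illustration, and the tool invocations and the ticketing system are stubbed out with print statements.

package main

import "fmt"

// State of a physical machine, as tracked by the repair automation.
type State int

const (
	Normal State = iota
	RecoveryInProgress
	SentForRepairs
	BeingReassimilated
	NeedsHuman // the automation is confused; a ticket has been opened
)

// Event is what the alerting system reports about a machine.
type Event int

const (
	HasProblems Event = iota
	IsDead
	CameBackToLife
)

// Machine couples a host with its repair state and a "human hold" flag that
// team members set before doing manual maintenance on it.
type Machine struct {
	Name      string
	State     State
	HumanHold bool
}

// handle maps (state, event) to a tool invocation and a new state. Anything it
// does not recognize is escalated to a person rather than guessed at.
func handle(m *Machine, e Event) {
	if m.HumanHold {
		return // a person is working on this machine; stay out of the way
	}
	switch {
	case m.State == Normal && (e == HasProblems || e == IsDead):
		fmt.Println("evacuating VMs and attempting recovery on", m.Name)
		m.State = RecoveryInProgress
	case m.State == RecoveryInProgress && e == IsDead:
		fmt.Println("recovery failed; filing a work order for", m.Name)
		m.State = SentForRepairs
	case m.State == RecoveryInProgress && e == CameBackToLife:
		fmt.Println("recovery worked; re-assimilating", m.Name)
		m.State = BeingReassimilated
	case m.State == SentForRepairs && e == CameBackToLife:
		fmt.Println("repair done; re-assimilating", m.Name)
		m.State = BeingReassimilated
	case m.State == BeingReassimilated && e == CameBackToLife:
		m.State = Normal
	default:
		fmt.Println("confused about", m.Name, "- opening a ticket and backing off")
		m.State = NeedsHuman
	}
}

func main() {
	m := &Machine{Name: "host17", State: Normal}
	for _, e := range []Event{HasProblems, IsDead, CameBackToLife, CameBackToLife} {
		handle(m, e)
	}
	fmt.Println("final state:", m.State)
}

Even in this toy version the two properties that mattered are visible: a human hold short-circuits the automation entirely, and any transition the automation cannot confidently handle falls through to a ticket for a person instead of an automated guess.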
The automation was very successful. Previously whoever was on call was paged once or twice a day. Now we were typically paged less than once a week.
Because of the design, the human team members continued to be involved in the system enough that they were always learning. Some people focused on making the tools better. Others focused on improving the software release and test process.
As stated earlier, one problem with the leftover principle is that the work left over for humans requires increasingly higher skill levels. At times we experienced the opposite! As the number of leftover tasks was reduced, it was easier to wrap our brains around the ones that remained. Without the mental clutter of so many other tasks, we were better able to assess the remaining tasks. For example, the most highly technical task involved a particularly heroic recovery procedure. We re-evaluated whether or not we should even be doing this particular procedure. We shouldn't.
The heroic approach risked data loss in an effort to avoid rebooting a virtual machine. This was the wrong priority. Our customers cared much more about data loss than about a quick reboot. We actually eliminated this leftover task by replacing it with an existing procedure that was already automated. We would not have seen this opportunity if our minds had still been cluttered with so many other tasks.
Another leftover process was building new clusters or machines. It happened infrequently enough that it was not worthwhile to fully automate. However, we found we could Tom Sawyer the automation into building the cluster for us if we created the right metadata to make it think that all the machines had just returned from repairs. Soon the cluster was built for us.
Processes requiring ad hoc improvisation, creativity, and evaluation were left to people. For example, certifying new models of hardware required improvisation and the ability to act given vague requirements.
The resulting system felt a lot like Iron Man's suit: enhancing our skills and taking care of the minutiae so we could focus on the big picture. One person could do the work of many, and we could do our jobs better thanks to the fact that we had an assistant taking care of the busy work. Learning did not stop because it was a collaborative effort. The automation took care of the boring stuff and the late-night work, and we could focus on the creative work of optimizing and enhancing the system for our customers.
I don't have a formula that will always achieve the benefits of the complementarity principle. However, by paying careful attention to how people's behavior will change as a result of automation and by maintaining shared control over the system, we can build automation that is more Iron Man, less Ultron.
Further Reading
John Allspaw's article "A Mature Role for Automation." (http://www.kitchensoap.com/2012/09/21/a-mature-role-for-automation-part-i).
David Woods and Erik Hollnagel's book Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. Taylor and Francis, Boca Raton, FL, 2005.
Chapter 12 of The Practice of Cloud System Administration, by Thomas A. Limoncelli, Strata R. Chalup, and Christina J. Hogan (http://the-cloud-book.com)
LOVE IT, HATE IT? LET US KNOW [email protected]
Thomas A. Limoncelli is an author, speaker, and system administrator. He is an SRE at Stack Overflow, Inc. in NYC. His books include The Practice of Cloud System Administration (the-cloud-book.com) and Time Management for System Administrators. He blogs at EverythingSysadmin.com.
© 2015 ACM 1542-7730/15/0500 $10.00
Originally published in Queue vol. 13, no. 8.
| 2024-11-08T06:39:50 | en | train |
10,821,922 | kernelv | 2016-01-01T14:13:27 | What Is Going to Happen in 2016 | null | http://avc.com/2016/01/what-is-going-to-happen-in-2016/ | 5 | 0 | null | null | null | no_error | What Is Going To Happen In 2016 | -0001-11-30T00:00:00+00:00 | Fred Wilson |
It’s easier to predict the medium to long term future. We will be able to tell our cars to take us home after a late night of new year’s partying within a decade. I sat next to a life sciences investor at a dinner a couple months ago who told me cancer will be a curable disease within the next decade. As amazing as these things sound, they are coming and soon.
But what will happen this year that we are now in? That’s a bit trickier. But I will take some shots this morning.
Oculus will finally ship the Rift in 2016. Games and other VR apps for the Rift will be released. We just learned that the Touch controller won’t ship with the Rift and is delayed until later in 2016. I believe the initial commercial versions of Oculus technology will underwhelm. The technology has been so hyped and it is hard to live up to that. Games will be the strongest early use case, but not everyone is going to want to put on a headset to play a game. I think VR will only reach its true potential when they figure out how to deploy it in a more natural way.
We will see a new form of wearables take off in 2016. The wrist is not the only place we might want to wear a computer on our bodies. If I had to guess, I would bet on something we wear in or on our ears.
One of the big four will falter in 2016. My guess is Apple. They did not have a great year in 2015 and I’m thinking that it will get worse in 2016.
The FAA regulations on the commercial drone industry will turn out to be a boon for the drone sector, legitimizing drone flights for all sorts of use cases and establishing clear rules for what is acceptable and what is not.
The trend towards publishing inside of social networks (Facebook being the most popular one) will go badly for a number of high profile publishers who won’t be able to monetize as effectively inside social networks and there will be at least one high profile victim of this strategy who will go under as a result.
Time Warner will spin off its HBO business to create a direct competitor to Netflix and the independent HBO will trade at a higher market cap than the entire Time Warner business did pre spinoff.
Bitcoin finally finds a killer app with the emergence of Open Bazaar protocol powered zero take rate marketplaces. (note that OB1, an open bazaar powered service, is a USV portfolio company).
Slack will become so pervasive inside of enterprises that spam will become a problem and third party Slack spam filters will emerge. At the same time, the Slack platform will take off and building Slack bots will become the next big thing in enterprise software.
Donald Trump will be the Republican nominee and he will attack the tech sector for its support of immigrant labor. As a result the tech sector will line up behind Hillary Clinton who will be elected the first woman President.
Markdown mania will hit the venture capital sector as VC firms follow Fidelity’s lead and start aggressively taking down the valuations in their portfolios. Crunchbase will start capturing this valuation data and will become a de-facto “yahoo finance” for the startup sector. Employees will realize their options are underwater and will start leaving tech startups in droves.
Some of these predictions border on the ridiculous and that is somewhat intentional. I think there is an element of truth (or at least possibility) in all of them. And I will come back to this list a year from now and review the results.
Best wishes to everyone for a happy and healthy 2016.
| 2024-11-08T15:57:15 | en | train |
10,822,132 | lkrubner | 2016-01-01T15:47:54 | Deferred in the ‘Burbs | null | http://mikethemadbiologist.com/2015/12/31/deferred-in-the-burbs/ | 19 | 11 | [
10823390,
10823097,
10823154
] | null | null | no_error | Deferred in the ‘Burbs | 2015-12-31T14:59:35+00:00 | Posted on |
There is a long-term–and completely undiscussed–problem U.S. suburbs face:
Something that’s lurking in the background of the U.S. economy, and which will erupt with a fury in ten years or so is the need to replace suburban infrastructure: underground wires, pipes, and so on. This is something new that most suburbs, unlike cities, haven’t had to confront. A suburb that was built in 1970 is long in the tooth today, and time only makes things worse. No suburbs that I’m aware of ever decided to amortize the future cost of repairs over a forty year period–that would require an increase in property taxes. In fact, many suburbs never even covered the expenses of building new subdivisions, never mind worried about expenses decades down the road….
Once suburbs start having to repair their infrastructure, it’s going to get very expensive to live there…
The problem we will face is how to keep suburbs economically viable, both in terms of infrastructure and quality of life. Part of that will have to involve increasing ‘urbanization’ of the suburbs, while other suburbs will be left to decline. But this, not gentrification (which can be reduced with progressive taxation) is a much more difficult problem. Not only will there be resistance by homeowners to changes, but the very, well, infrastructure of the suburbs doesn’t lend itself to increasing density.
It appears Charles Marohn beat me to the punch (boldface mine):
Marohn primarily takes issue with the financial structure of the suburbs. The amount of tax revenue their low-density setup generates, he says, doesn’t come close to paying for the cost of maintaining the vast and costly infrastructure systems, so the only way to keep the machine going is to keep adding and growing. “The public yield from the suburban development pattern is ridiculously low,” he says. One of the most popular articles on the Strong Towns Web site is a five-part series Marohn wrote likening American suburban development to a giant Ponzi scheme.
…The way suburban development usually works is that a town lays the pipes, plumbing, and infrastructure for housing development—often getting big loans from the government to do so—and soon after a developer appears and offers to build homes on it. Developers usually fund most of the cost of the infrastructure because they make their money back from the sale of the homes. The short-term cost to the city or town, therefore, is very low: it gets a cash infusion from whichever entity fronted the costs, and the city gets to keep all the revenue from property taxes. The thinking is that either taxes will cover the maintenance costs, or the city will keep growing and generate enough future cash flow to cover the obligations. But the tax revenue at low suburban densities isn’t nearly enough to pay the bills; in Marohn’s estimation, property taxes at suburban densities bring in anywhere from 4 cents to 65 cents for every dollar of liability. Most suburban municipalities, he says, are therefore unable to pay the maintenance costs of their infrastructure, let alone replace things when they inevitably wear out after twenty to twenty-five years. The only way to survive is to keep growing or take on more debt, or both. “It is a ridiculously unproductive system,” he says.
Marohn points out that while this has been an issue as long as there have been suburbs, the problem has become more acute with each additional “life cycle” of suburban infrastructure (the point at which the systems need to be replaced—funded by debt, more growth, or both). Most U.S. suburbs are now on their third life cycle, and infrastructure systems have only become more bloated, inefficient, and costly. “When people say we’re living beyond our means, they’re usually talking about a forty-inch TV instead of a twenty-inch TV,” he says. “This is like pennies compared to the dollars we’ve spent on the way we’ve arranged ourselves across the landscape.”
By comparison, urban gentrification is an easy problem (one, of several solutions, to prevent price asset inflation among high end goods is more progressive taxation). Some suburbs will have to be left to die. Others will become impoverished. Others, the fortunate ones, will figure out ways to increase density.
This is going to be really ugly.
| 2024-11-07T22:41:49 | en | train |
10,822,185 | zdw | 2016-01-01T16:04:44 | Would You Put a Little Speaker in Your Vagina (for Your Baby)? | null | http://nymag.com/thecut/2015/12/vagina-speaker.html | 2 | 0 | null | null | null | no_error | Would You Put a Little Speaker in Your Vagina (for Your Baby)? | 2015-12-30T13:05:00.000-05:00 | Kelly Conaboy |
“I love this song!” —Your Baby
Photo: Mediscan/Corbis
Here’s a good question: Would you put a little tampon-style speaker in your vagina? What if I told you — you’re pregnant in this scenario — that, with the tampon-style speaker inserted into your vagina, your in-womb baby could listen to something like, ah, I don’t know, Dead Kennedys? The baby would learn to dislike California governor Jerry Brown in an outdated way, but at least he or she would come out with a healthy dislike of corporations and fascists. Hmm. Something to think about. Anyway, check out this speaker tampon.
The little speaker tampon is called “Babypod” and it’s shown in action, sort of, in a new video from the gynecology clinic Institut Marquès in Barcelona. In the video, singer Soraya performs a set of Christmas carols for expecting mothers (equipped with Babypods) and their unborn babies. “This is the first concert for fetuses ever held in the world” the video boasts multiple times, even though I highly doubt they did much research before making this claim.
The Babypod came about after a Spanish study proved fetuses are able to detect sound once they reach between 18 and 26 weeks. The fetuses apparently even sometimes move their mouths and tongues in response to the sound, which seems odd to me. Huh.
The Babypod is reportedly set at a rather quiet volume of 54 decibels, and it’s allegedly good for the fetus because of something about brain development on which experts don’t really agree.
So there you have it. Vaginal speaker — for your baby.
| 2024-11-08T07:33:47 | en | train |